GDPR and the Maltese Political Propaganda

Unless you’ve been living under a rock, you know GDPR is in effect as of 25th May 2018. I want to discuss the impact of GDPR on Maltese political propaganda. This new law brings many changes and rights to the end user – below is a summary which should be easily understandable without any jargon.

Politicians CAN send you personalized letters through postal services

Politicians do have the right to obtain your home address through three main channels: the Electoral Register, the Online Directory and the Printed Directory. However, these letters must carry clear instructions on how one can easily opt out of these personalized letters, and they must provide a method to do so, such as a mailing address, email or contact number.

Politicians CAN call you on your telephone / mobiles

Politicians do have the right to obtain your telephone / mobile number, again through three main channels: the Electoral Register, the Online Directory and the Printed Directory. However, the caller must respect the fact that you can ask them to hang up and remove your personal details so that they do not contact you again. One can also opt out of the directories, through one's phone operator, in order to hide one's telephone / mobile number.

Politicians CAN contact you on election day to encourage you to go out and vote

Politicians can contact you using different mediums, which may be obtained from the following: the Online Directory, the Printed Directory, public Social Media information or any other information that the end user has made public himself. Of course, you can ask whoever is contacting you to stop all communications and immediately delete your personal information.

Politicians CANNOT send you e-mails UNLESS you have given them explicit consent and your email address directly

In order for a politician to obtain your e-mail address, you must give it to them. This is typically done through direct political channels, such as propaganda websites. Also, these channels must provide an easy way to opt out of these emails, such as an unsubscribe link, which is clearly labelled and appropriately visible. The unsubscribe link must be sent with each and every email.

Politicians CANNOT send you SMSs UNLESS you have given them explicit consent and your phone number directly

In order for a politician to obtain your phone number, you must give it to them, exactly like the previous case with e-mails. The sender must provide a contact number or email address with each and every SMS sent, which the receiver can use to opt out of such SMSs.

3rd Party contractors which may send propaganda SMSs (such as Go / Vodafone) MUST ensure that the politician has obtained the proper consent of the end user

If the politician does not provide clear and legitimate proof of consent to the 3rd Party contractor, the contractor must refuse to send propaganda SMSs to end users.

Liking / Following a politician’s account on Social Media means that you’re giving them consent to obtain your PUBLIC Social Media information, such as your e-mail or phone number.

Any information that you publicly post on Social Media can be accessed and used for political propaganda, by simply liking / following a page / account.

Some additional information:

  1. You ALWAYS have the right to opt out of any propaganda messages, whether it’s postal, email, SMS or any other means, even something that has not been invented yet.
  2. Consent is ALWAYS needed for electronic communications, whether it’s email or some futuristic electronic holographic message – consent is ALWAYS needed.
  3. Personalized letters can be sent through any means, be it postal services or some fancy pigeon delivery. Keep in mind that you can always opt out of these letters, but by default, you’re in.
  4. If you’re tired of receiving door-to-door political propaganda, you can stick a “no junk mail” message on your letterbox, and they must respect this fact.
  5. Politicians CANNOT charge you money to opt you out of any communication.

Note that I’ve omitted some less important information in this article, such as dealing with Automated Calling Machines. All this information has been obtained from the Office of the Information and Data Protection Commissioner. If you feel that your rights have been violated, you can submit a report here.
Note that by all means this article is not legal advice, merely a guideline on your rights.

Source of article is here

Google Hash Code 2018 solution and source code – 1st in Malta and top 20% worldwide

Our solution: https://github.com/albertherd/Hash-Code-2018-Team-Stark

I recently had the honour of teaming up with my friends (and co-workers) to have a crack at the Google Hash Code 2018 online qualification round. We went in to compete with little to no expectation. In fact, two hours into the competition, we had almost nothing!

Google Hash Code is a team based competition, where teams all over the world gather together to participate and solve a Google engineering problem, using a language of their choice.

The problem was: given a set of rides and a number of cars in a city grid, assign rides to cars and serve each ride within a restricted time. There are also bonuses if you manage to start a ride as early as possible. Here are the problem statement and input data sets A, B, C, D and E.

Our solution is not the cleanest, I must admit. One has to keep in mind that when undertaking these competitions, the biggest challenge is to come up with a decent solution to the problem in a very short amount of time, rather than a complex solution which solves the problem perfectly.

However imperfect, hacks-infested and dirty, it managed to place us first in the Maltese islands, and in the top 20% worldwide (more specifically, 895th).


Our solution does not make use of a Genetic Algorithm; we tried to create a Greedy Algorithm with several optimizations in order to boost the quality of the output. Optimizations include:

  • Having the main loop revolve around steps, rather than cars or rides (much like a game engine main loop).
  • Sort all the initial rides by start time.
  • Discard any rides that have been already undertaken.
  • When trying to pick out a ride, we make sure that given current time (or step), we check if the ride can actually be served on time. Using this thought, we’ve managed to produce output with 0 late rides across the board!
  • Get the first N feasible rides, sort them by distance to pickup, and undertake the closest one. Since each ride yielded the same bonus irrespective of the distance travelled, the shortest rides were always the most profitable.
  • Other minor optimizations.
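A minimal sketch of this greedy loop (simplified to a single car; the Ride fields and function names here are illustrative, not from our actual submission) could look like this:

```cpp
#include <algorithm>
#include <climits>
#include <cstdlib>
#include <vector>

// Hypothetical simplified model: rides with pickup/drop-off cells,
// an earliest start and a latest finish.
struct Ride {
    int sr, sc;    // pickup cell (row, column)
    int er, ec;    // drop-off cell
    int earliest;  // earliest step the ride may start
    int latest;    // latest step the ride must finish by
};

static int Dist(int r1, int c1, int r2, int c2) {
    return std::abs(r1 - r2) + std::abs(c1 - c2);  // Manhattan distance
}

// Greedy assignment: repeatedly take the feasible ride with the closest
// pickup, skipping any ride that can no longer be completed on time.
int ServeRides(const std::vector<Ride>& rides, int maxSteps) {
    std::vector<bool> taken(rides.size(), false);
    int row = 0, col = 0, step = 0, served = 0;
    while (true) {
        int best = -1, bestDist = INT_MAX;
        for (std::size_t i = 0; i < rides.size(); ++i) {
            if (taken[i]) continue;
            const Ride& r = rides[i];
            int toPickup = Dist(row, col, r.sr, r.sc);
            int start = std::max(step + toPickup, r.earliest);
            int finish = start + Dist(r.sr, r.sc, r.er, r.ec);
            if (finish > r.latest || finish > maxSteps) continue;  // would be late
            if (toPickup < bestDist) { bestDist = toPickup; best = static_cast<int>(i); }
        }
        if (best < 0) break;  // no feasible rides left
        const Ride& r = rides[best];
        step = std::max(step + Dist(row, col, r.sr, r.sc), r.earliest)
             + Dist(r.sr, r.sc, r.er, r.ec);
        row = r.er; col = r.ec;  // car ends up at the drop-off cell
        taken[best] = true;
        ++served;
    }
    return served;
}
```

The feasibility check inside the loop is what guaranteed zero late rides in our output; sorting the initial rides by start time and pruning impossible rides would slot in before the loop.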

There are multiple optimizations and enhancements that we wished to include, such as removing those impossible rides from the data set as we go along, and distributing each car / ride according to a “zone”. In fact, the code does have a Zone class, which never got used.

Have a look at our solution : https://github.com/albertherd/Hash-Code-2018-Team-Stark

Until the next one!

Performance differences when using AVX instructions

Download source code from here

Recent news on exploits on both Meltdown and Spectre got me thinking and researching a bit more in depth on Assembly. I ended up reading on the differences and performance gains when using SIMD instructions versus naive implementations. Let’s briefly discuss what SIMD is.

SIMD (Single Instruction, Multiple Data) is the process of piping vector data through a single instruction, effectively speeding up calculations significantly. Given that a SIMD instruction can process a larger amount of data in parallel, it provides a significant performance boost when used. Real-life applications of SIMD are various, ranging from image processing to audio processing and graphics generation.

Let’s investigate the real performance gains of SIMD instructions – in this case we’ll be using AVX (Advanced Vector Extensions), which provides newer SIMD instructions. We’ll be using several of them: VADDPS, VSUBPS, VMULPS and VDIVPS, which add, subtract, multiply and divide single precision numbers (floats), respectively.

In reality, we will not be writing any Assembly at all; we’ll be using intrinsics, which ship with any decent C/C++ compiler. For our example we’ll be using the MSVC compiler, but any decent compiler will do. The Intel Intrinsics Guide provides a very good platform to look up any intrinsic functions one may need, thus removing the need to write Assembly – just C code.

There are two benchmarks for each arithmetic operation: one done naively and one using intrinsics, thus using the relevant AVX instruction. Each operation is performed 200,000,000 times, to make sure there is enough work to produce a meaningful benchmark.
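The timing harness itself isn’t shown in this post; a minimal sketch of how each benchmark could be timed (the TimeMs name is mine, not from the downloadable source) using std::chrono:

```cpp
#include <chrono>

// Run the given operation `iterations` times and return the elapsed
// wall-clock time in milliseconds.
template <typename F>
long long TimeMs(F&& op, int iterations) {
    auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < iterations; ++i)
        op();
    auto end = std::chrono::steady_clock::now();
    return std::chrono::duration_cast<std::chrono::milliseconds>(end - start).count();
}
```

Each of the naive and AVX functions below would then be passed to this harness with the same iteration count.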

Here’s an example of how the multiplication is implemented naively:

void DoNaiveMultiplication(int iterations)
{
    float z[8]; // x and y are float[8] globals defined in the downloadable source

    for (int i = 0; i < iterations; i++)
    {
        z[0] = x[0] * y[0];
        z[1] = x[1] * y[1];
        z[2] = x[2] * y[2];
        z[3] = x[3] * y[3];
        z[4] = x[4] * y[4];
        z[5] = x[5] * y[5];
        z[6] = x[6] * y[6];
        z[7] = x[7] * y[7];
    }
}

Here's an example of how the multiplication is implemented in AVX:

void DoAvxMultiplication(int iterations)
{
    __m256 x256 = _mm256_loadu_ps(x); // _mm256_loadu_ps takes a float pointer directly
    __m256 y256 = _mm256_loadu_ps(y);
    __m256 result;

    for (int i = 0; i < iterations; i++)
    {
        result = _mm256_mul_ps(x256, y256);
    }
}
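Note that the loop above keeps the result in a register and never stores it, which is fine for a quick benchmark but means an optimizing compiler could in principle elide the work. A hedged sketch (not from the original benchmark) that stores the product, so it can be verified against the naive version:

```cpp
#include <immintrin.h>

// Multiply two arrays of 8 floats with one AVX instruction and store the
// result. The GCC/Clang target attribute lets this single function use AVX
// without compiling the whole file with -mavx.
__attribute__((target("avx")))
void Multiply8(const float* x, const float* y, float* z)
{
    __m256 vx = _mm256_loadu_ps(x);
    __m256 vy = _mm256_loadu_ps(y);
    _mm256_storeu_ps(z, _mm256_mul_ps(vx, vy));
}
```

Under MSVC (which this post uses) you would enable /arch:AVX instead of the target attribute.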

Finally, let's take a look at how the results look:

Performance gains when using AVX

From the graph above, one can see that when optimizing from naive to AVX, there are the following gains:

  • Addition: 217% faster – from 1141ms to 359ms
  • Subtraction: 209% faster – from 1110ms to 359ms
  • Multiplication: 221% faster – from 1156ms to 360ms
  • Division: 300% faster – from 2687ms to 672ms

Of course, the benchmarks show the best case scenarios; so real-life mileage may vary. These benchmarks can be downloaded and tested out from here. Kindly note that you’ll need either an Intel CPU from 2011 onwards (Sandy Bridge), or an AMD processor from 2011 onwards (Bulldozer)  in order to be able to run the benchmarks.


UPDATED: Intel and its flawed Kernel Memory Management Security

It has emerged that Intel CPUs made in the last decade or so are missing proper checks when it comes to securing Kernel Memory. It would seem that through special (undocumented) steps, a User-Mode application can peek and make changes to Kernel-Mode Memory. This means that any application, such as your browser, can access and change your system memory.

Some theory

In the 32-bit era, an application could typically access up to 4GB of RAM; this had been the de-facto standard for ages. What really happened is that the application had access to 2GB for User-Mode memory (typically used to hold the memory the application needs to function). The other 2GB was mapped to Kernel space, containing memory locations for Kernel-Mode memory.

In the 64-bit era, these memory limitations were lifted since a 64-bit architecture can access a far larger address space (16 exabytes, to be exact). Given that the Kernel-Mode memory is so much larger (2^48 bytes, or 256TB, with today’s 48-bit implementations), the OS can place it anywhere it pleases, randomly. This randomness (Address Space Layout Randomization) makes it much harder for foul-playing applications to find the addresses of Kernel-Mode functions.

So, what’s happening?

Typically, code that runs in User-Mode does not have access to Kernel-Mode memory. However, the Kernel-Mode mappings are kept around so that when an application switches to Kernel-Mode (needed, for example, to open a file from disk), the Kernel-Mode memory is still accessible, avoiding the need to have two memory tables, one for User-Mode and one for Kernel-Mode. Having more than one table would mean that during every sysenter (or equivalent), tables need to be swapped, caches need to be flushed, plus whatever overhead such operations require.

It would seem that on Intel CPUs, researchers have found a way to bypass this security feature. This means that a User-Mode application can now access Kernel-Mode memory, which is devastating. A User-Mode application can apply small changes to the Kernel and change its functionality. Since an application has access to Kernel memory, an attacker can basically do whatever he pleases with the target’s system.

How can this be fixed?

Unfortunately, an easy fix is not available. The whole memory management logic needs to be re-written so that, instead of having just one memory table which maps both User-Mode and Kernel-Mode memory, an additional table holds the Kernel-Mode memory; this table is only accessible from Kernel-Mode. The change is being dubbed Kernel page-table isolation (KPTI, previously known as KAISER).

Adding a new memory table and switching to-and-fro has negative effects on the overall system performance, especially in I/O heavy applications, because I/O involves a lot of User-Mode to Kernel-Mode switching. Given that the new code needs to run every time the system switches from User-Mode to Kernel-Mode, performance degradation is expected. Unofficial figures quote between a 5%-30% performance impact, depending on the application. OC3D has provided some benchmarks; FS-Mark (an I/O benchmark) shows a devastating hit in performance. PostgreSQL developers reported a best case of a 17% slowdown and a worst case of 23% with the new Linux patch.

Which operating systems are vulnerable?

Basically, all operating systems are vulnerable to this hack, because the bug goes beyond the operating system: it lives on the CPU rather than at an operating system level. Scary! Vendors have been (secretly) informed of this issue and are working on fixing the vulnerability.

Are non-Intel CPUs vulnerable?

All we know at the moment is that AMD CPUs are NOT vulnerable. This has been confirmed by AMD themselves. In fact, Tom Lendacky from AMD has issued a fix for the Linux kernel itself, adding a check so that if the CPU is AMD, the mitigation is not applied.

What’s next? How can I stay safe?

If you have an AMD CPU, well then congratulations, you’re safe! If you’re on an Intel system, don’t panic just yet. Yes, you are vulnerable, but you still control what you do with your computer. If you don’t visit dodgy websites and don’t install dodgy applications, you’ll remain safe. But that’s age-old advice.


BOV, you’re a good bank, but your app SUCKS!

As Christmas dawns on us, most of us go out to do the usual Christmas shopping. As of this year, I’ve decided to ditch using cash (where possible) and only use bank cards when shopping. Sounds great right? Well, almost.

I am a BOV customer and I have two main accounts with them: a Cashlink account (where my wage is deposited) and another account, which has a VISA card bound to it. Normally, I only put the money that I’ll be spending on this account, as a safety measure. This means that every time I need to make a purchase, I use the BOV app to do the transfer. This is where it quickly goes south.

So, last time I was at Valletta doing some shopping, I went to transfer some money to my VISA account. I fired up my BOV app, logged in and bam, “Application Error”. Let’s try again – but now, when I logged in, I got disconnected from my Wi-Fi and got connected on 4G. Great, the connection dropped again! Third time’s the charm! I re-logged in, and was greeted by the beautiful “you are already logged in” message. Needless to say, I had to wait again, switch off my Wi-Fi to make sure I didn’t connect somewhere else, and then actually manage to do the transfer. Far more painful than it should be!

Anyway, this was not the first time that I’ve had this issue. Actually, I’m surprised when it works the first time round! I’ve been talking with my friends about how painful it is, and one of them introduced me to Revolut. Basically, it’s an online bank which provides you with a very good mobile app and a MasterCard. I’ve ended up replacing my BOV VISA with the MasterCard from Revolut, and I regret nothing. There are multiple advantages to using an alternative bank, but for the scope of this blog, that’s a bit irrelevant.

So, what can BOV do to their app to win me back? Because, in reality I’d still rather do all my banking with BOV rather than an alternative bank, but BOV has so much work to do beforehand.

1) I don’t care if I’m logged in from another location

Why is this a feature in the first place? If I want to manage my money from my BOV app and BOV Internet Banking, so be it! Anyway, this issue happens because you’re disconnected from the BOV app before hitting logout. This is the worst issue of them all if I’m honest.

2) The app is slow

It seems that the app is always sluggish. Why does it take 5-10 seconds to log in and get my balance? Coming to think of it, every screen transition takes about 5-10 seconds! Hey, at least they fixed the issue of waiting a minute on “Checking Security” popup. That wasn’t fun!

3) The UI is STUPID

I have a full HD (1080*1920) screen, but it seems that the BOV app can only use 7% of the screen to place the username and key fields. Why are they so tiny? Even worse, allowing multiple users to log in from the same app is a bit daft in the first place; it should only ask for my password (and remember my user).

4) The UI is dated

I get the feeling that this application was designed when Android was still in version 2 or 3. The UI is very dated – the way that settings appear reminds me of old Android. By the way, why is the settings tab activated by a tiny hamburger icon, when clearly there’s a LOT of space available?

5) Why am I not allowed to make payments to any IBAN I desire?

I can make payments to my friends (if I have their mobile number), to a list of hardcoded shops (I assume vetted by them), top up my phone and that’s it! If I need to make a payment to some IBAN, I cannot do it through the app; I’ll need to use their online portal.

6) Where is fingerprint authentication?

I assume the answer is “they can’t be bothered”. Obviously, since they haven’t done any decent update to the app since like forever, this feature is stuck in oblivion. Before I got my Revolut account, I never appreciated the comfort of logging in using your fingerprint, and believe me, it works GREAT.

7) I’d like more fine-grained security control over my cards

We live in 2017, but it seems that BOV is living in 1017. Why am I not allowed to turn my VISA on and off on the fly? While you’re at it, I’d like fine-grained control over swipe payments, ATM withdrawals and online payments, please.

8) Where are the contactless cards?

This is a good one as well, although not related to the app per se. It seems that although BOV has rolled out contactless cards, I haven’t got one; why’s that? I’ll just assume that I need to go through some hoops and whatnot to get my hands on one. Contactless is AWESOME, by the way.

9) Why do pre-authorization payments take days to appear?

I’ve always wondered about this. Sometimes payments are pre-authorized rather than withdrawn instantly. When these payments occur, I can only notice because my book balance is different from (less than) my available balance. They’re not shown in the statement, for some weird reason. My Revolut account shows these instantly, and it also notes that the transaction is awaiting confirmation or reversal.

10) Some other features that I’d like

  • A filterable statement, or at least sorted by month
  • Split Bill
  • Request money
  • Freeze card / account

I’m pretty sure there are a million other things I could nitpick on, but that’s all for now. Until the next one!

I hate it when my laptop’s fan switches on – here’s how I solved it (Part 1)!

I’ve made it a point to buy my laptop equipped with an Intel U-based processor – this is to make sure that my laptop is as light, power efficient and quiet as possible. My HP Spectre X360 does all of this; well, almost. It’s light (around 1.3kg) and power efficient (8-10+ hours of battery), but it is not the quietest laptop on the planet.

When the laptop has a relatively moderate task to process, it ramps up the CPU to full speed (3.5 GHz). That’s great, except for the fact that high clocks generate a lot of heat. When the threshold temperature is constantly exceeded (in my laptop’s case, around 50°C), the fan needs to kick in to manage thermals.

There’s nothing wrong with that; the laptop functions perfectly. What I’d like is to do all these tasks, whilst the laptop remains cool and will only require passive cooling. How can this be achieved? By lowering the maximum CPU Clock, of course!

What I ended up doing is setting the maximum CPU usage to 45% (around 1.6 GHz) instead of 100%. This means that tasks run slightly slower, but the laptop runs much cooler. Even better, most of the time the performance cost is not felt, since most tasks do not actually max out the CPU; a lower CPU clock is thus sufficient!

For now, I’ve solved it naively – a fixed value is not the most efficient. There are times when my laptop is running well below the threshold temperature at which the fan needs to kick in. A more intelligent solution would be to adjust the clocks on the fly, so that the laptop maintains a target temperature, much like how NVIDIA’s GPU Boost works.

This is very easy to set up – this can be accessed through the Windows Power Options. Here’s a step by step guide.

1) Right click the battery icon – select Power Options

2) Select your desired power plan and select Change plan settings

3) Select Change Advanced Power Settings

4) Scroll down, open Processor power management, open Maximum processor state, and type your maximum value (e.g. 45%)
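If you prefer the command line, the same setting can be applied with powercfg and its documented aliases (45 being the percentage; run this from an elevated prompt):

```shell
# Cap the maximum processor state at 45% for the active plan, on AC and battery
powercfg /setacvalueindex SCHEME_CURRENT SUB_PROCESSOR PROCTHROTTLEMAX 45
powercfg /setdcvalueindex SCHEME_CURRENT SUB_PROCESSOR PROCTHROTTLEMAX 45
# Re-apply the scheme so the change takes effect immediately
powercfg /setactive SCHEME_CURRENT
```

This is the same Maximum processor state value that the GUI steps above change.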

That’s it! Next time, we’ll see how we can do all this programmatically, through WinAPI.

Until the next one.

Security by Obscurity – in real life!

We were discussing security by obscurity in the office today – it’s always a topic that we end up having a laugh at. If you have no idea what I’m talking about, read about security by obscurity here.

That’s all fine and funny, until you witness it. Us Maltese just witnessed it, last weekend, with a twist. Instead of being in some poorly written software, this was in a shop. Basically, a local Jewellery shop was robbed by professionals and they removed / deleted all security footage in the process!

You might say that this is not IT related – but I’m afraid it’s very relevant. This got me thinking: how did they get access to the security footage? Was it there, exposed, just waiting for someone to meddle with and delete the footage? It seems these people thought so. Although I don’t have many details on how this was done, I would assume that these shops don’t keep a copy of the footage at another site, just in case accidents like these happen.

So, what do I propose? Simple – it’s a bit illogical to keep the security footage at the same site where it’s being recorded. Ideally, this footage would be (instantly) moved to some off-site storage, making use of the cloud. Is there any provider doing this? A quick Google Search says yes: I’ve found examples such as CamCloud. Of course, I have no idea what the company offers since I’m not affiliated with it.

Given that today’s world is moving to the cloud, I can’t help but wonder if incidents like these can be mitigated by using such cloud services.

Examining the Ensure Pattern

The ensure concept is a programming practice that involves calling a method named EnsureXxx() before proceeding with a method call; it has two main usages: security and performance.

Security

Let’s start by discussing the security usage of the Ensure pattern. When you enter a piece of code that can be considered critical, one needs to make sure that the code can proceed safely. Thus, before executing any call that requires such safety, an Ensure method is called.

Typically, this method will check that the state the code is currently running in is valid (what defines valid is up to the program itself) and that any properties pass sanity checks. If the state or properties are invalid, the code simply throws an exception and execution stops immediately.

A typical signature for such a method accepts no parameters and returns void, for example EnsureAccessToDatabase(). Such a method makes sure that the application is in the correct state and that any properties (such as the connection string) are properly set.

Performance

The second usage of the Ensure pattern is performance. Many times, creating new objects will create internal dependencies which may be expensive to build. Even worse, it might be the case that the code only makes use of a portion of such objects and ends up not using the expensive dependencies at all. To circumvent this, performant code will defer the creation of expensive objects until they are needed.

Let’s consider an example – say we have an object that may require database access if certain operations are executed. These operations would call something like EnsureDatabaseConnection(), which checks whether the database connection exists and opens it if it does not.
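A sketch of that shape, written here in C++ for brevity (the Repository class and both Ensure method names are illustrative, not from any real codebase):

```cpp
#include <stdexcept>
#include <string>
#include <utility>

// Hypothetical repository illustrating both usages of the Ensure pattern.
class Repository {
    std::string connectionString;
    bool connected = false;

    // Security usage: throw immediately if preconditions don't hold.
    void EnsureValidState() const {
        if (connectionString.empty())
            throw std::runtime_error("connection string not configured");
    }

    // Performance usage: create the expensive resource only when first needed.
    void EnsureDatabaseConnection() {
        EnsureValidState();
        if (!connected) {
            // ... open the real connection here ...
            connected = true;
        }
    }

public:
    explicit Repository(std::string cs) : connectionString(std::move(cs)) {}

    int FetchCount() {
        EnsureDatabaseConnection();  // safe to use the connection past this point
        return 42;                   // placeholder result
    }

    bool IsConnected() const { return connected; }
};
```

Every public operation that touches the database starts with the Ensure call, so the rest of the method body can assume a valid, open connection.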

The second usage is a bit obsolete nowadays though – given the introduction of the Lazy&lt;T&gt; class, it makes more sense to wrap your deferred instances in a Lazy&lt;T&gt; rather than in an Ensure method. Lazy&lt;T&gt; provides thread-safe initialisation out of the box, which you would have to implement manually in an Ensure method.
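For readers outside .NET, Lazy&lt;T&gt;'s thread-safe deferred initialisation is roughly analogous to std::call_once in C++; a small sketch (the LazyRepository and Connection names are mine):

```cpp
#include <memory>
#include <mutex>
#include <string>

struct Connection { std::string target; };

// Analogue of wrapping a deferred instance in Lazy<T>: the connection is
// created on first use only, and initialisation is thread-safe because
// std::call_once runs the lambda exactly once across all threads.
class LazyRepository {
    std::once_flag once;
    std::unique_ptr<Connection> conn;

public:
    Connection& GetConnection() {
        std::call_once(once, [this] {
            conn.reset(new Connection{"db-server"});  // expensive setup happens here, once
        });
        return *conn;
    }
};
```

Repeated calls return the same instance without re-running the initialisation, which is exactly what the manual Ensure method had to guarantee by hand.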

In real world applications, I still use the security component of the pattern though; it’s a very clean way to do security and sanity checks in your code, without becoming bloated.

Until the next one!

Exception Filtering in C#

What do you do when you have a piece of code that can fail, and when it fails, you need to log to a database? You wrap your code in a try-catch block and chuck a Log call in the catch block. That’s all good! What if I tell you that there is a better way to do it?

try
{
    // Code that might fail
}
catch(Exception ex)
{
    // Handle
    // Log to database
}

What’s the problem with the typical approach?

When your code enters a catch block, the stack unwinds. This refers to the process where the stack goes backwards / upwards in order to arrive at the stack frame where the original call is located. Wikipedia explains this in a bit more detail. What this means is that we might lose information about the original stack location and state. If a catch block is entered just to log to the database and the exception is then re-thrown, we’re losing vital information about where the issue exists; this is especially true in release / live environments.

What’s the way forward?

C# 6 offers the Exception Filtering concept; here’s how to use it.

try
{
    //Code
}
catch (FancyException fe) when (fe.ErrorCode > 0)
{
    //Handle
}

The above catch block won’t be executed if the ErrorCode property of the exception is not greater than zero. Brilliant, we can now introduce logic without interfering with the catch mechanism and avoiding stack unwinding!

A more advanced example

Let’s now look at a more advanced example. The application below accepts input from the Console – when the input length is zero, an exception with code 0 is raised; otherwise, an exception with code 1 is raised. Any time an exception is raised, the application logs it. Though, the exception is only caught if the ErrorCode is greater than 0. The complete application is on GitHub.


class Program
{
    static void Main(string[] args)
    {
        while (true)
        {
            new FancyRepository().GetCatchErrorGreaterThanZero(Console.ReadLine());
        }
    }
}

public class FancyRepository
{
    public string GetCatchErrorGreaterThanZero(string value)
    {
        try
        {
            return GetInternal(value);
        }
        catch (FancyException fe) when (LogToDatabase(fe.ErrorCode) || fe.ErrorCode > 0)
        {
            throw;
        }
    }

    private string GetInternal(string value)
    {
        if (!value.Any())
           throw new FancyException(0);

        throw new FancyException(1);
    }

    private bool LogToDatabase(int errorCode)
    {
        Console.WriteLine($"Exception with code {errorCode} has been logged");
        return false;
    }
}


1st Scenario – Triggering the filter

In the first scenario, when the exception is thrown by the GetInternal method, the filter successfully executes and prevents the code from entering the catch statement. This can be seen from the fact that Visual Studio breaks on the throw new FancyException(0); line rather than on the throw; line. This means that the stack has not been unwound; we can still investigate the randomNumber value. The Call Stack is fully preserved – we can go through each frame and investigate the data in each stack frame.


2nd Scenario – Triggering the catch

In the second scenario, when the exception is thrown by the GetInternal method, the filter passes since the ErrorCode is greater than 0. This means that the catch statement executes and the exception is re-thrown. In the debugger, we can see this because Visual Studio breaks on the throw; line rather than the throw new FancyException(1); line. This means that we’ve lost a stack frame; it is impossible to investigate the randomNumber value, since the stack has been unwound to the GetCatchErrorGreaterThanZero call.


What’s happening under the hood?

As one can assume, the underlying code being generated must differ at an IL level, since the stack is not being unwound. And one would assume right – the when keyword is being translated into the filter instruction.

Let’s take two try-catch blocks, and see their equivalent IL.


try
{
    throw new Exception();
}
catch(Exception ex)
{

}

Generates

 .try
 {
     IL_0003: nop
     IL_0004: newobj instance void [mscorlib]System.Exception::.ctor()
     IL_0009: throw
 } // end .try
 catch [mscorlib]System.Exception
 {
     IL_000a: stloc.1
     IL_000b: nop
     IL_000c: nop
     IL_000d: leave.s IL_000f
 } // end handler

The next one is just like the previous, but it introduces a filter to check on some value on whether it’s equivalent to 1.


try
{
    throw new Exception();
}
catch(Exception ex) when(value == 1)
{

}

Generates

.try
 {
     IL_0010: nop
     IL_0011: newobj instance void [mscorlib]System.Exception::.ctor()
     IL_0016: throw
 } // end .try
 filter
 {
     IL_0017: isinst [mscorlib]System.Exception
     IL_001c: dup
     IL_001d: brtrue.s IL_0023
     IL_001f: pop
     IL_0020: ldc.i4.0
     IL_0021: br.s IL_002d
     IL_0023: stloc.2
     IL_0024: ldloc.0
     IL_0025: ldc.i4.1
     IL_0026: ceq
     IL_0028: stloc.3
     IL_0029: ldloc.3
     IL_002a: ldc.i4.0
     IL_002b: cgt.un
     IL_002d: endfilter
 } // end filter
 { // handler
     IL_002f: pop
     IL_0030: nop
     IL_0031: nop
     IL_0032: leave.s IL_0034
 } // end handler

Although the second example generates more IL (partly due to the value checking), it does not enter the catch block! Interestingly enough, the filter keyword is not available in C# directly (it is only available through the use of the when keyword).

Credits

This blog post would have been impossible had readers of my blog not provided me with the necessary feedback. I understand that the first version of this post was outright wrong. I’ve taken the feedback received from my readers and changed it so that it now delivers the intended message. I thank all the people below.

Rachel Farrell – Introduced me to the fact that the when keyword generates the filter IL rather than just being syntactic sugar.

Ben Camilleri – Pointed out that when catching the exception, the statement should be throw; instead of throw ex; to maintain the StackTrace property properly.

Cedric Mamo – Pointed out that the logic was flawed and provided the appropriate solution in order to successfully demonstrate it using Visual Studio.

Until the next one!

Code never lies, Documentation sometimes does!

Lately, I was working on a Windows Service using C#. Having never done such using C#, I’d thought that I’d go through the documentation that Microsoft provides. I went through it in quite a breeze; my service was running in no time.

I then added some initialisation code, which means that service startup is not instant. No problem with that; in fact, the documentation has a section dedicated to this scenario. The user’s code can provide status updates on what state the initialisation is in. Unfortunately, C# does not provide this functionality; you’ll have to call the native API to do so (through the SetServiceStatus call).

As I was going through the C# documentation of the struct that it accepts, I noticed that it does not match up with the documentation for the native API. The C# documentation says that it accepts long (64-bit) parameters, whilst the native API says that it accepts DWORD (32-bit) parameters. This got me thinking: is the C# documentation wrong?

I whipped up two applications: one in C++ and one in C#. I checked the size, in bytes, of the SERVICE_STATUS struct that SetServiceStatus expects. The answer was 28 bytes, which makes sense given that it consists of 7 DWORDs (32-bit): 7 * 4 = 28 bytes.

size_t sizeOfServiceStatus = sizeof(SERVICE_STATUS);
cout << "Size: " << sizeOfServiceStatus << endl;

The C# application consists of copying and pasting the example in Microsoft’s documentation. Checking the ServiceStatus struct’s size showed 56! Again, this was not surprising, since it consists of 6 longs (64-bit), plus the ServiceState enum (which defaults to int, 32-bit), plus an additional 32 bits of padding: (6 * 8) + 4 + 4 = 56. Therefore, the resultant struct is 56 bytes instead of 28!

int size = Marshal.SizeOf(typeof(ServiceStatus));
Console.WriteLine("Size: " + size);

Unfortunately, this will still appear to work in code, but obviously the behaviour of this function call is undefined, since the data being fed in is completely out of alignment. To make matters worse, pinvoke.net reports the same definition as Microsoft does, which threw me off in the beginning as well.

Naturally, fixing this issue is trivial; it’s just a matter of converting all longs to uint (since DWORD is an unsigned integer). Therefore, the example should look like the following:

public enum ServiceState
{
    SERVICE_STOPPED = 0x00000001,
    SERVICE_START_PENDING = 0x00000002,
    SERVICE_STOP_PENDING = 0x00000003,
    SERVICE_RUNNING = 0x00000004,
    SERVICE_CONTINUE_PENDING = 0x00000005,
    SERVICE_PAUSE_PENDING = 0x00000006,
    SERVICE_PAUSED = 0x00000007,
}

[StructLayout(LayoutKind.Sequential)]
public struct ServiceStatus
{
    public uint dwServiceType;
    public ServiceState dwCurrentState;
    public uint dwControlsAccepted;
    public uint dwWin32ExitCode;
    public uint dwServiceSpecificExitCode;
    public uint dwCheckPoint;
    public uint dwWaitHint;
};