On the usage of bool in method parameters

The number of times that I encountered a piece of code like the following is quite alarming:

Transaction transaction =
    TransactionFactory.CreateTransaction(
        true,
        true,
        false,
        true);

What do those true, true, false, true mean? We'd need to look up the method signature every time we want to understand what those four booleans represent! This means that anyone trying to skim through your code will have a bad time. Side note: if you're writing method calls with such parameters, you should seriously consider rethinking them.

Let's try to improve the readability of that code a bit:

Transaction transaction =
    TransactionFactory.CreateTransaction(
        true /* postInSage */,
        true /* isPaidInFull */,
        false /* recurringTransaction */,
        true /* sendEmailReceipt */);

What did we do here? We added a comment next to each boolean so that the reader can quickly identify what each one signifies. Neat, we've improved the readability by a lot! Microsoft developers seem to like doing it this way; a quick look at the .NET Framework reference source will show you some good examples.

But what happens if the order of the booleans changes? Apart from breaking functionality, the comments will not update themselves to reflect the new call. As they say: comments lie, code never does.

Instead of documenting the parameter names with comments, C# offers named arguments. This means you can ignore the order of the parameters altogether and simply prefix each value with the name of the parameter it is being passed to. Let's apply it to our example.

Transaction transaction =
    TransactionFactory.CreateTransaction(
        postInSage: true,
        isPaidInFull: true,
        recurringTransaction: false,
        sendEmailReceipt: true);

That's looking great! We can even improve it a bit further by defaulting all boolean parameters to false, so that we only pass the ones that should be true.

Now, the method signature will look like this:

CreateTransaction(
    bool postInSage = false,
    bool isPaidInFull = false,
    bool recurringTransaction = false,
    bool sendEmailReceipt = false) 

The method call will look like this:

Transaction transaction =
    TransactionFactory.CreateTransaction(
        postInSage: true,
        isPaidInFull: true,
        sendEmailReceipt: true); 

We can also take a totally different approach: eliminate the boolean parameters altogether and introduce an enum, specifically a [Flags] enum. This means that when we call the CreateTransaction method, we simply pass the required flags (a sketch of the enum declaration itself follows the example below). The call will look something like this:

Transaction transaction =
    TransactionFactory.CreateTransaction(
        TransactionFlags.PostInSage |
        TransactionFlags.IsPaidInFull |
        TransactionFlags.SendEmailReceipt);

Not bad! When you read that piece of code, you can easily identify the options that are taken into consideration when creating the transaction. We've eliminated the need for booleans in favour of flags.
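
For completeness, here's a rough sketch of what the flags enum and the corresponding CreateTransaction signature might look like. The names simply mirror the examples above; they're illustrative, not an actual API.

[Flags]
public enum TransactionFlags
{
    None = 0,
    PostInSage = 1 << 0,
    IsPaidInFull = 1 << 1,
    RecurringTransaction = 1 << 2,
    SendEmailReceipt = 1 << 3
}

public static Transaction CreateTransaction(TransactionFlags flags)
{
    // Individual options are read back with HasFlag (or a bitwise AND).
    bool postInSage = flags.HasFlag(TransactionFlags.PostInSage);
    bool sendEmailReceipt = flags.HasFlag(TransactionFlags.SendEmailReceipt);

    // ... build and return the transaction (illustrative; assumes a parameterless constructor).
    return new Transaction();
}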

Does this mean that booleans should never be used as parameters? Of course not. I just wanted to shed some light on the fact that there are approaches that make writing and consuming such APIs a more readable affair.

Handling uncaught exceptions in a SharePoint environment through customErrors in web.config

Handling exceptions in a SharePoint environment is quite straightforward; it's an out-of-the-box feature. Any unhandled exception is caught by the SharePoint environment and a pretty "Sorry, something went wrong" screen is shown.

The generic "Sorry, something went wrong" page shown when an unhandled exception is caught

That's all great! Things get a bit tougher when exceptions occur outside the SharePoint context. How can this happen? In my case, it happened when custom code was being loaded through the HttpContext and a DLL failed to load. In that case, there is no SharePoint context to catch the exception, so it remains unhandled. By default, unhandled exceptions generate the Yellow Screen of Death.

The Yellow Screen of Death

That's not something we should be displaying, right? Of course not! First and foremost, we're showing the actual code to the public, and that's never good. Even if we turn off the exception details, it's still terrible practice to show such a screen to the public.

Alright then, let's use a custom page to show our error! This is where it gets a bit trickier. Sadly, we cannot safely and reliably meddle with SharePoint's Global.asax file to catch errors globally and handle them nicely. We need to think of something else.

Whilst trying to tackle this issue, my obvious guess was to amend the web.config, more specifically the customErrors section. My first attempt was to create a simple custom error page, put it in my IIS folder and configure the customErrors section to serve this page. My web.config looked something like this:

<customErrors defaultRedirect="errorPage.html" mode="On" />

After trying that out, I was still stuck with the same Yellow Screen of Death. What was happening? It turns out that when an error is thrown, the redirect to my custom page does indeed happen. But since that page is served by the same site, loading it attempts to re-load all my HttpModules, which fail and throw a new exception all over again. This means that this approach could never work.

To solve this, a few additional steps are needed. The solution is to create a new site in IIS, designed specifically to handle these error cases. Let's go through the steps to carry this out.

These steps assume that you are in possession of a functional SharePoint on-premise solution.

1) Create a friendly error page to display to the user when something goes wrong

To keep this as simple as possible, we're just going to display a header saying that something went wrong: no fancy images or CSS. You could be a bit more creative if you want, but for the sake of this blog post, this is fine.

<html>
<body>
<h1>Sorry, something went wrong</h1>
</body>
</html>

2) Create a new IIS Site to host our newly created HTML

This site will simply host the newly created HTML. We can create it either in the same Application Pool or in a new one; it does not really matter in this case. I'll just follow the default configuration provided by IIS.

Adding a new site in IIS

3) Map your newly created IIS site to a subdomain.

Since we still need to use the same external port, we have to make sure that our newly created IIS site is accessible from outside, which means setting up the appropriate site bindings in IIS. If you're running HTTPS (please do), you'll need a wildcard certificate and IIS 7.5 or higher.

Editing the site bindings for our new IIS site

4) Assign permissions to Everyone on the newly created Site.

Creating the site is not enough; we also need to allow everyone to access it. We'll use one of the built-in special identities, "Everyone", so that anyone will be able to reach the page.

Step 1: Right-click your site -> Edit Permissions
Step 2: In the Security tab, press Edit
Step 3: In the new window, hit Add and type "Everyone" in the Object name field
Step 4: Assign Read permission to "Everyone"

5) Configure the customErrors tag in your SharePoint site to point to the newly created Site

All we need now is to point the customErrors tag to the newly created site.


<customErrors defaultRedirect="https://error.mysharepointsite.com" mode="On" />


All done! Keep in mind that this page is used in very special circumstances: when exceptions occur and the SharePoint context has not kicked in. All other typical exceptions are to be handled by the default out of the box SharePoint behavior.

I’m a blogger, not a System Admin!

I had to make a simple choice when I started this blog: do I want to be a blogger or a System Administrator? What does this mean? Let me elaborate.

As one might assume, this is a blog powered by WordPress.org, the free tool that lets bloggers like me get online and start writing. But opting for the free version means something else: you need to download it, set up a server somewhere, install and configure it, buy a domain, and do whatever else is required to get things running.

What does all of the above mean? It means spending a couple of hours doing all that stuff, time which could have been spent writing a blog post just like this one. Let's not even mention the fact that such systems constantly need patching. That sounds like a job for a System Administrator, not a blogger. The limited time I have between all the other things in my life, I'd rather spend doing stuff that I actually like rather than installing packages.

Enter Software as a Service (SaaS): you pay some money per year and it's all managed for you. This means that installation, deployment, patching and other System Administrator jobs are done for me. That sounds great, doesn't it? But obviously, going down this route also has some drawbacks. Let's quickly go through the pros and cons.

Pros

  1. There is almost no delay between deciding to start blogging and actually blogging. When I purchased my subscription, everything was ready in seconds. Compared to spending a couple of hours on deployment, installation and configuration, I'm very grateful.
  2. The hosts are (hopefully) resilient. Paying someone to host your blog means you can reasonably expect it to stay online without significant downtime.
  3. Plugins and themes are vetted. Dodgy plugins simply aren't available for me to install, so they can't put the blog at risk of getting hacked.
  4. Systems are always up to date, without any intervention from the end user. Keeping your system up to date is no joke; it protects you against vulnerable software. It also means there's no headache of plugins breaking after an update, since that headache is handled by the host.
  5. It simply works. This might seem like an Apple advert but it’s true. No dealing with out of this world errors, just subscribe and get blogging!

Cons

  1. Such systems normally reduce the flexibility available to the end user by a LOT. Since you do not have access to the underlying operating system, some custom settings simply cannot be applied.
  2. Theme and plugin installation is limited to the ones provided by the host. But hey, at the end of the day that's a good thing, since it gives you peace of mind that these are vetted by the host, which is why this is also listed in the 'Pros' section.
  3. Last but not least, you have to pay! Granted, not a lot, but some people might be discouraged when they have to fork out money. This might be the biggest con for many people.

Anyway, at the end of the day I concluded that it's worth forking out some money and being a bit restricted, rather than having to think about every step myself. Although I'm losing some flexibility, I can spend my time writing, which is what I want to be doing in the first place.


I hate var and so should you!

Have you ever been working on a piece of code and ended up writing something like this?


var myReallyCoolObject = GetMyReallyCoolObject();

If yes, please do everyone a favor and remove that var. It's not doing anyone any favors; it's just reducing the readability of the code. Without a decent IDE, you can never know what that var actually is, and even in Visual Studio you'll need to hover over the var keyword to discover what it really is. All it does is inconvenience anyone who's going to read your code.

Moreover, a var can be a ticking time bomb, waiting to explode. How? Great question! Let me illustrate.


private bool RandomFunc()
{
    var myRandomNumber = GetRandomNumber();        // inferred as int (for now)
    return IsRandomNumberInRange(myRandomNumber);  // binds to the int overload
}

private bool IsRandomNumberInRange(int num)
{
    return num > 10;
}

private bool IsRandomNumberInRange(double num)
{
    return num > 50;
}

private int GetRandomNumber()
{
    return 42;
}

If, along the way, GetRandomNumber's return type changes from int to double, the code will still build, but the call silently binds to the double overload and the behavior changes (the range check becomes > 50 instead of > 10). Eliminating the var resolves this inconsistency, as the sketch below shows. Obviously this is a trivial piece of code, but the principle still applies.
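
To make this concrete, here's the explicit version (a minimal sketch): with the type spelled out, a change in GetRandomNumber's return type becomes a compile-time error instead of a silent change in behavior.

private bool RandomFunc()
{
    // If GetRandomNumber() is later changed to return double, this line no longer
    // compiles (there is no implicit double-to-int conversion), forcing a conscious fix.
    int myRandomNumber = GetRandomNumber();
    return IsRandomNumberInRange(myRandomNumber);
}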

Of course, there are excellent uses of var. I use it when I'm trying out some LINQ and can't be bothered to work out the return type of some really long LINQ operation, as illustrated below.

Given a list of Students (Name and Class):


List<Student> students = new List<Student>
{
    new Student { Name = "Albert", Class = "1A" },
    new Student { Name = "Herd", Class = "1A" }
};
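
(The Student class itself isn't shown in the post; for the snippets to compile, a minimal shape like the following would do. Its members are assumed for illustration.)

public class Student
{
    public string Name { get; set; }
    public string Class { get; set; }
}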

Doing an OrderBy and a GroupBy on such a list results in a fairly complex type. I'd rather see


var grouped = students.OrderBy(student => student.Name).GroupBy(student => student.Class);

than see


IEnumerable<IGrouping<string, Student>> grouped = students.OrderBy(student => student.Name).GroupBy(student => student.Class);

I mean, spelling out that complex declared type provides no real business value; besides, in this case we're not interested in the specifics of the type.

Here are my thoughts on when you can use var (a short sketch follows the list):

  • When the return type is complex and we're not really interested in it; focus on the business part rather than the technical part.
  • When you initialize the variable in the same line using a new construct.
  • Anonymous Types.
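
For instance, these are the kinds of declarations where I'd happily reach for var (illustrative, reusing the Student class from above and assuming the usual System.Linq and System.Collections.Generic usings):

// The type is already spelled out on the right-hand side of the assignment.
var students = new List<Student>();

// Anonymous type: there is no type name we could write out even if we wanted to.
var projection = students.Select(s => new { s.Name, s.Class });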

Here are my thoughts on when you should NOT use var:

  • When the overall readability of the code is impacted (for me, this is the deal breaker).
  • When the variable is being initialized using a method call.
  • When using var introduces the possibility of a silent change in behavior, as in the overload example above.

So please, don't just use var for all your declared types; use it with common sense.

Can we be a bit more careful on how we use the Internal access modifier?

The other day I was writing some SharePoint code and needed a RunWithElevatedPrivileges call. This call is normally accompanied by the creation of new SPSite and SPWeb objects, as demonstrated in the RunWithElevatedPrivileges MSDN excerpt shown below. What the code actually does doesn't really matter for the sake of this post.


SPSecurity.RunWithElevatedPrivileges(delegate()
{
    using (SPSite site = new SPSite(web.Site.ID))
    {
        // implementation details omitted
    }
});

This is all fine and good, but I noticed that the project I'm working on already contains loads of RunWithElevatedPrivileges calls, each accompanied by the creation of a new SPSite and SPWeb. It would be great to have an overload of RunWithElevatedPrivileges that provides a callback with the elevated SPSite and SPWeb as parameters, rather than having to create them myself. Surely SharePoint offers this; but a quick look at the public SharePoint API shows that no such overload exists.

Then I thought: how is this possible? This is a common use case; somewhere in the SharePoint API, this ought to exist. So I grabbed ILSpy, decompiled the assembly and had a quick look. Unsurprisingly, I found the exact overload I was looking for. Though, for some weird reason, it's marked internal rather than public. Hold on a minute, why is this kind of API not public? This is not some internal abstraction; it's an API that should be readily available to developers.


// Microsoft.SharePoint.SPSecurity
internal static void RunWithElevatedSiteAndWeb(SPWeb originalWeb, SPSecurity.CodeToRunWithElevatedSite secureCode)
{
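    // Note: 1073741823 appears to be the special ID of the SHAREPOINT\System account,
    // i.e. the code is already running elevated, so the current site and web can be reused as-is.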
    if (originalWeb.CurrentUser != null && originalWeb.CurrentUser.ID == 1073741823 && !originalWeb.Site.HasAppPrincipalContext)
    {
        secureCode(originalWeb.Site, originalWeb);
        return;
    }
    SPSecurity.RunWithElevatedPrivileges(delegate
    {
        using (SPSite sPSite = new SPSite(originalWeb.Site.ID, originalWeb.Site.Zone))
        {
            using (SPWeb sPWeb = sPSite.OpenWeb(originalWeb.ID))
            {
                secureCode(sPSite, sPWeb);
            }
        }
    });
}

This made me think: can we be a bit more careful about how we use the internal access modifier? I understand that portions of the code should be private, since such code is only used within the same class to simplify its implementation. But an API that keeps a LOT of clearly useful methods internal is a big no for me. It adds no business value to the API, just frustration for the end developer, who has to re-implement (or copy) the same functionality in his own solution, something like the sketch below.
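
For illustration, here's roughly the wrapper you end up writing yourself: a simplified re-creation of the internal overload above, with my own (hypothetical) naming.

public static class ElevationHelper
{
    // Runs the given code against an elevated SPSite/SPWeb pair and disposes both afterwards.
    public static void RunWithElevatedSiteAndWeb(SPWeb originalWeb, Action<SPSite, SPWeb> secureCode)
    {
        SPSecurity.RunWithElevatedPrivileges(delegate
        {
            using (SPSite elevatedSite = new SPSite(originalWeb.Site.ID, originalWeb.Site.Zone))
            using (SPWeb elevatedWeb = elevatedSite.OpenWeb(originalWeb.ID))
            {
                secureCode(elevatedSite, elevatedWeb);
            }
        });
    }
}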

Obviously, I am not saying that ALL internal methods are badly designed; if that were the case, the modifier would not exist at all. I'm saying that API developers should think twice before marking as internal something that could clearly be used by 3rd party developers. Private methods are fine, but with internal methods, I think one needs to be a bit more careful.

Or... maybe this is just one of the many, many quirks of the SharePoint API.