You don’t need more than 1080p on a 13″ screen!

Recently, I’ve been in the market for a new 13″ laptop. I ended up buying an HP Spectre x360: i7, 8GB RAM, 1080p touch screen and the usual gizmos. In this post I’ll talk about the huge headache I went through (not counting the hours spent searching through reviews) in order to actually decide what to buy.

I was quite sure of what I wanted – a lightweight 13″ laptop with an i7, 8GB of RAM and so on. In other words, a really portable machine which won’t slow me down on the go. There were several contenders in this department: the Dell XPS 13, Lenovo Yoga 910, Razer Blade Stealth, the aforementioned HP Spectre x360 and some others which were quickly eliminated from the list. The biggest question was always: 1080p or 4K screen?

People had mixed feelings about this; some said go for 1080p and some said 4K. Here are my thoughts on the subject. Oh, by the way – this argument only applies to Windows-based laptops; it does not apply to non-Windows machines.

Let’s start with the biggest problem that a higher resolution carries. If the pixel count grows and the screen size does not, the actual pixels get smaller. This means that 300 pixels on a 13″ 1080p panel might be around 4 cm long, but the same 300 pixels on a 13″ 4K panel span roughly half that, since the linear pixel density doubles. Most (older) applications were designed to work in raw pixels, so they do not cater for high resolutions on small screens.
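To make that arithmetic concrete, here’s a quick back-of-the-envelope sketch (assuming a 13.3″ 16:9 panel; the numbers are illustrative only):

using System;

class PixelDensityDemo
{
    // Pixels per inch along the diagonal = diagonal pixel count / diagonal size in inches.
    static double DiagonalPpi(int widthPx, int heightPx, double diagonalInches) =>
        Math.Sqrt((double)widthPx * widthPx + (double)heightPx * heightPx) / diagonalInches;

    static void Main()
    {
        double ppi1080 = DiagonalPpi(1920, 1080, 13.3); // ≈ 166 PPI
        double ppi4k = DiagonalPpi(3840, 2160, 13.3);   // ≈ 331 PPI

        // 300 pixels expressed in centimetres (1 inch = 2.54 cm).
        Console.WriteLine($"300 px at 1080p ≈ {300 / ppi1080 * 2.54:F1} cm"); // ≈ 4.6 cm
        Console.WriteLine($"300 px at 4K    ≈ {300 / ppi4k * 2.54:F1} cm");   // ≈ 2.3 cm
    }
}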

Fortunately, Microsoft has recognised this problem and provides a feature to scale the display accordingly. Old applications will be scaled up to an appropriate size, but this comes at a cost: most of the time, the bigger the scale, the blurrier the window will look. I’ve illustrated this below – one can “clearly” see that the D is quite blurred out.

Scaling blurring: the scaled-up letter D appears noticeably blurred

Microsoft themselves acknowledge this problem and provide some workarounds for it. Fortunately, as time goes on, more and more applications are being designed with this problem in mind and scale quite nicely. Also, the newer UWP applications (such as the modern-looking applications on Windows 10 – Settings, Calculator and the like) handle this natively; they do not suffer from these problems.

In my case, my 1080p 13″ display came configured out of the box to use 150% scaling. This means that applications that do not handle such scaling themselves are simply stretched by a factor of 1.5 in order to appear at the right size. So the scaling and blurring problem already exists on a 1080p display, let alone a 4K one! Apps which scale poorly will simply exhibit worse symptoms, since the scaling factor needs to be even bigger at a 4K resolution.
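To put that multiplication into code form, here’s a rough sketch of the relationship (Windows treats 96 DPI as 100% scaling; the helper name is mine, purely for illustration):

class ScalingDemo
{
    // Windows' baseline is 96 DPI = 100% scaling; 150% scaling corresponds to an effective 144 DPI.
    static double ScaleFactor(double effectiveDpi) => effectiveDpi / 96.0;

    static void Main()
    {
        // A window laid out 300 units wide by a non-DPI-aware app gets bitmap-stretched
        // to 300 * 1.5 = 450 physical pixels at 150% scaling - which is exactly where
        // the blurring comes from.
        System.Console.WriteLine(300 * ScaleFactor(144)); // 450
    }
}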

This problem also exists in games; Linus (of Linus Tech Tips) played Half-Life on a 16K setup, and the scaling was just laughable.

My end verdict? If you’re buying a Windows-based machine, don’t opt for a 4K panel on a 13″ display; it will only make the scaling problem worse. Let’s just hope for a better future where all applications scale correctly! I hope this saves some time and headache for anyone who is in the market for a 13″ laptop.

I have not gone into too much technical detail on what is actually going on; I do not want to confuse potential non-technical readers. This post will be followed up by a technical one explaining what is happening under the hood and, as a programmer, how to code against this problem. If you are interested, though, the problem mostly lies in the domain of DPI and DIPs.

Program something different during the weekend

If you, the reader, are like me, chances are you spend your fair share of time programming during the weekend. It’s in us; it’s a passion. But I believe that some of us are doing it wrong. Some people work on the same line of technologies during the weekend as they do during the week. They do not expose themselves to new technologies; they stay stuck within the same comfortable boundaries. It’s time to push yourself.

There is nothing wrong with programming during the weekend – for me, it’s an itch that needs to be scratched. I do, however, try to avoid the technologies that I use at work, to expose myself to the ever-changing world of programming. Sometimes it’s not easy to expose yourself to new technologies; there are several barriers that hinder this. Here are a few.

It’s a new programming language

Chances are that if you’re trying a new technology, it’s backed by a different programming language. This means that you’re likely to get stuck on very trivial problems, such as simply forgetting syntax or lacking knowledge of the underlying APIs. You’ll end up re-implementing features that probably already exist and are provided natively by the language’s supporting libraries. That’s OK though; you’ll probably be Google-ing problems along the way, and learning new techniques as you go.

It’s a new programming paradigm

This is a bit tougher. You’ll be leaving the typical train of thought that you usually think with. A typical example is a C# / Java developer having a crack at some C programming. Although C# / Java are indeed influenced by C, they live in a separate programming paradigm: C# / Java are object-oriented languages, whilst C is a procedural language. You’ll need to think quite differently when programming in each of these languages.

It’s a different programming generation

This is similar to the point above, but it’s a different classification. One might work a lot with 3GL languages, such as C# / Java / C or your typical run-of-the-mill language, and then want to have a crack at some good SQL. That’s a different programming generation in its own right. The definitions might be a bit fuzzy, but the differences certainly exist: 3GLs are general-purpose, step-by-step languages, whilst 4GLs such as SQL are declarative and deal with data and table structures. One is not meant to replace the other; they simply complement each other.

It’s a different application type

Most of us developers normally target one type of application, such as web applications or desktop applications. Designing an application for any one of these types requires a very different train of thought. Writing a desktop application? You need to think about a fluid experience whilst probably remaining fairly portable. Writing a web application? It needs to work across browsers and different types of clients. Each of these application types requires a very different tool-set (and potentially, programming language). And even within the same application type, there are very different kinds of solutions that live in the same paradigm.

It’s a different approach to the same application type

If I’m honest, I could not come up with an appropriate title for this category, so let me try to explain with the desktop side of things. There are numerous different kinds of applications that live within that one application type: your typical desktop application, a background service / daemon, a 3D application, a driver, you name it! Each kind of desktop application requires a very different toolset and skillset.

What can we conclude from these points? There are loads of different areas that, as programmers, we have probably never experimented with. Pick one of the points mentioned above and apply it to your weekend programming, and it will be a totally new experience for you.

Where can one start? Well, it’s easy! Take one of the different approaches I just mentioned and head to Google / YouTube. You can also experiment with premium providers such as Pluralsight and the like. These paid platforms do not come cheap, but most of their content comes from very reputable people and provides excellent material to learn from.

Am I the only person saying this? No, and plenty of people are already following this trend. An article from Stack Overflow illustrates the points mentioned above: they looked at what people search for during the week and compared it with what people search for during the weekend. One can see that, for example, SharePoint is clearly a topic that is only worked on during the week, whilst Haskell is a weekend project! Check the full article here.

StackOverflowLanguages
Topics during the week vs during the weekend. Courtesy of StackOverflow


What’s in it for you at the end of the day? Let’s highlight some points.

Expand your professional career.

Getting stuck in the same technologies over and over again is obviously not helping you expand your career. Your CV will never grow; it will just show that you’ve been stuck in the same comfort zone forever, suggesting that you’re probably not willing to step out of it. On the other hand, showing experience in a broad range of areas shows that you never tire of learning, are always up for a new challenge and can step out of your comfort zone.

Gather new skills.

Sometimes seeing different languages, tutorials or simply different approaches to solving tasks will enrich your mind. Even if you pick up a single skill from a weekend’s worth of development, it makes you a better developer.

Gain a new outlook.

Sometimes you’re stuck thinking that your way is the only way, or the best way, to solve a task. Then you follow a new technique in a completely different language or paradigm and realise that there is a totally different solution that you can apply to your everyday work.

Contribute to the community.

We’ve all used projects that have been written by the community, for the community. Have you ever contributed back? If you’re stuck with the same skill-set, probably not. Learning new stuff will enable you to do just that. Plus, the satisfaction of giving back to the community is simply a great feeling.

Have fun!

Last, and probably most important, is having fun! Doing something that you don’t love is pointless. This is work that you may never get to use in your professional life; it’s simply there to get your programming juices flowing and to let you enjoy learning and experimenting with new things.

On the usage of bool in method parameters

The number of times that I encountered a piece of code like the following is quite alarming:

Transaction transaction =
    TransactionFactory.CreateTransaction(
        true,
        true,
        false,
        true);

What do those true, true, false, true mean? We’ll need to view the method signature each time we want to understand what those four booleans represent! This means that anyone trying to skim through your code will have a bad time. Side note: if you’re writing method calls with such parameters, you seriously need to consider re-thinking them.

Let’s try to improve the readability of that code a bit:

Transaction transaction =
    TransactionFactory.CreateTransaction(
        true /* postInSage */,
        true /* isPaidInFull */,
        false /* recurringTransaction */,
        true /* sendEmailReceipt */);

What did we do here? We added a comment next to each boolean so that, when reading the code, the reader can quickly identify what each boolean signifies. Neat, we’ve improved the readability by a lot! Microsoft developers seem to like doing it this way; a quick look at the .NET Framework Reference Source will show you some good examples, such as here, here and here.

But what happens if the order of the booleans changes? Apart from breaking functionality, the comments will not update to reflect the new API call. As they say: comments lie, code never does.

Instead of documenting the parameter names with comments, C# offers the facility of naming your arguments. This means that you can choose to ignore the order of the parameters and simply prefix each value with the name of the parameter it is intended for. Let’s apply it to our example.

Transaction transaction =
    TransactionFactory.CreateTransaction(
        postInSage: true,
        isPaidInFull: true,
        recurringTransaction: false,
        sendEmailReceipt: true);

That’s looking great! We can improve it a bit further by defaulting all boolean parameters to false, so that we only need to pass the ones which should be true.

Now, the method signature will look like this:

Transaction CreateTransaction(
    bool postInSage = false,
    bool isPaidInFull = false,
    bool recurringTransaction = false,
    bool sendEmailReceipt = false)

The method call will look like this:

Transaction transaction =
    TransactionFactory.CreateTransaction(
        postInSage: true,
        isPaidInFull: true,
        sendEmailReceipt: true); 

We can also take a totally different approach: eliminate the boolean parameters altogether and introduce enums, specifically enum flags. This means that when we call the CreateTransaction method, we simply pass the required flags. In case you forgot, here’s a quick refresher on how it works. It will look something like this:

Transaction transaction =
    TransactionFactory.CreateTransaction(
        TransactionFlags.PostInSage |
        TransactionFlags.IsPaidInFull |
        TransactionFlags.SendEmailReceipt);

Not bad! When you read that piece of code, you can easily identify the properties that should be taken into consideration when creating the transaction. We ended up eliminating the need for booleans in favour of flags; a sketch of what the backing enum could look like follows below.
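For completeness, here is a minimal sketch of what such a flags enum might look like – the TransactionFlags name and its members are hypothetical, simply mirroring the parameters used above:

// Hypothetical flags enum; each member occupies its own bit so values can be combined with |.
[Flags]
public enum TransactionFlags
{
    None                 = 0,
    PostInSage           = 1 << 0,
    IsPaidInFull         = 1 << 1,
    RecurringTransaction = 1 << 2,
    SendEmailReceipt     = 1 << 3
}

// Inside CreateTransaction, each option can then be checked with HasFlag (or a bitwise AND):
// bool postInSage = flags.HasFlag(TransactionFlags.PostInSage);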

Does this mean that booleans should never be used as parameters? Of course not. I just wanted to shed some light on the fact that there are approaches which make writing and consuming such APIs far more readable.

Handling uncaught exceptions in a SharePoint environment through customErrors in web.config

Handling exceptions in a SharePoint environment is quite straightforward; it’s an out-of-the-box feature. Any unhandled exception is caught by the SharePoint environment and a pretty “Sorry, something went wrong” screen is shown.

Sorry, something went wrong exception
Generic error when an unhandled exception is caught

That’s all great! Things get a bit tougher when exceptions occur outside the SharePoint context. How can this happen? In my case, it happened when custom code was being loaded through an HttpContext and a DLL failed to load. In that situation there is no SharePoint context to catch the exception, so it remains unhandled. By default, unhandled exceptions generate the Yellow Screen of Death.

Yellow Screen of Death
Yellow Screen of Death – Source

That’s not something we should be displaying, right? Of course not! First and foremost, we’re showing the actual code to the public – that’s never good. And even if we manage to turn off the details in such exceptions, it’s still terrible practice to show such a screen to the public.

Alright then, let’s use a custom page to show our error! This is where it gets a bit trickier. Sadly, in a SharePoint environment we cannot safely and reliably meddle with the Global.asax file to globally catch errors and handle them nicely. We need to think of something else.

Whilst I was trying to tackle this issue, my obvious guess was to amend the web.config – more specifically the customErrors section. My first attempt at solving this was to create a simple custom error page, put it in my IIS folder and configure the customErrors section to load that page. My web.config looked something like this:

<customErrors defaultRedirect="errorPage.html" mode="On" />

After trying that out, I was still stuck with the same Yellow Screen of Death. What was happening? It turns out that when an error is thrown, a redirect to my custom page does indeed happen. But when that page tries to load, it attempts to re-load all my HttpModules, which fail and throw a new exception all over again. This means that this approach could never work.

In order to solve this, I had to take some additional steps. The solution is to create a new site in IIS that is designed specifically to handle such erroneous cases. Let’s go through the steps needed to carry this out.

These steps assume that you are in possession of a functional SharePoint on-premises installation.

1) Create a friendly error page to display to the user when something goes wrong

In order to keep this as simple as possible, we’re just going to display a header saying that something went wrong – no fancy images or CSS. You could be a bit more creative if you want, but for the sake of this blog post, this will do.

<html>
<body>
<h1>Sorry, something went wrong</h1>
</body>
</html>

2) Create a new IIS Site to host our newly created HTML

This site will simply be used to host the newly created HTML. We can opt to create it either in the same Application Pool or in a new one; it does not really matter in this case. I’ll just follow the default configuration provided by IIS.

Adding a new site in IIS
Adding a new site in IIS

3) Map your newly created IIS site to a subdomain.

Since we still need to use the same external port, we need to make sure that our newly created IIS site is accessible from outside; we create the appropriate bindings in IIS to ensure this. If you’re running HTTPS (please do), you’ll need a wildcard certificate and IIS 7.5 or higher.

siteBindings
Editing site bindings for our new IIS site

4) Assign permissions to Everyone on the newly created Site.

Creating the site is not enough; we need to allow Everyone to access it. We’ll use one of the out-of-the-box special identities provided by Windows – the “Everyone” identity – so that anyone will be able to access the site.

Permissions
Step 1: Right click your site -> Edit Permissions

securityEdit
Step 2: In the Security tab, press Edit

addEveryone
Step 3: In this new window, hit Add. In the Object name, type “Everyone”

addReadPermission
Step 4: Assign Read permission to “Everyone”

5) Configure the customErrors tag in your SharePoint site to point to the newly created Site

All we need now is to point the customErrors tag in the SharePoint site’s web.config to the newly created site.


<customErrors defaultRedirect="https://error.mysharepointsite.com" mode="On" />


All done! Keep in mind that this page is only used in very special circumstances: when exceptions occur before the SharePoint context has kicked in. All other typical exceptions are handled by the default out-of-the-box SharePoint behaviour.

I’m a blogger, not a System Admin!

I had to make a simple choice when I started this blog: do I want to be a blogger or a System Administrator? What does this mean? Let me elaborate.

As one might obviously assume, this is a blog powered by WordPress; WordPress.org offers it as a free tool for bloggers like me to get online and blogging. But opting for the free, self-hosted version means something else: you need to download it, set up a server somewhere, install it, configure it, buy a domain, and do whatever else is required to get things running.

What does all the above mean? It means that you need to spend a couple of hours doing all that work – time which could have been spent writing a blog post, just like this one. Let’s not even mention the fact that such systems constantly need patching. That sounds like a job for a System Administrator, not a blogger. The limited time that I have between all the other things in my life, I’d rather spend doing stuff that I actually like rather than installing packages.

Enter Software as a Service (SaaS) – you pay some money per year and it’s all managed for you. Installation, deployment, patching and other System Administrator jobs are done for me. That sounds great, doesn’t it? But obviously, going down this route also has some drawbacks. Let’s quickly go through the pros and cons.

Pros

  1. There is almost no downtime between deciding to start blogging and actually blogging. When I purchased my subscription, everything was ready in seconds. Compared to a couple of hours spent on deployment, installation and configuration, I’m very grateful.
  2. The hosts are (hopefully) resilient. Paying someone to host your blog means that you’re guaranteed that your blog will always be online without any (reasonable) downtime.
  3. Plugins / themes are vetted – dodgy plugins that would put blogs at risk of getting hacked are simply not available for me to install.
  4. Systems are always up to date, without any interference from the end user. Keeping your system up to date is no joke; it means that you’re protected against vulnerable software. It also means there is no headache of plugins breaking after an update, since that headache is handled by the host.
  5. It simply works. This might seem like an Apple advert but it’s true. No dealing with out of this world errors, just subscribe and get blogging!

Cons

  1. Such systems normally reduce the flexibility available to the end user by a LOT. Since you do not have access to the underlying operating system, some custom settings simply cannot be applied.
  2. Theme and plugin installation is limited to the ones provided by the host. But hey, at the end of the day that’s also a good thing, since it gives you peace of mind that they have been vetted by the host – hence why this is also listed in the ‘Pros’ section.
  3. Last but not least, you have to pay! Granted, not a lot, but some people might be discouraged when they have to fork out money. This might be the biggest con for many people.

Anyway, at the end of the day I concluded that it’s worth forking out some money and being a bit restricted, rather than having to think about every step myself. Although I’m losing some flexibility, I get to spend my time writing, which is what I really want to do.


I hate var and so should you!

Have you ever been writing a piece of code and ended up writing this?


var myReallyObject = GetMyReallyCoolObject();

If yes, please do everyone a favour and remove that var. It’s not doing anyone any favours; it’s just reducing the readability of your code. Without a decent IDE, you can never know what that var actually is, and even in Visual Studio you’ll need to hover over the variable to discover its type. All it does is inconvenience anyone who is going to read your code.

Moreover, a var can be a ticking time-bomb, waiting to explode. How? Great question! Let me illustrate.


private bool RandomFunc()
{
    var myRandomNumber = GetRandomNumber();
    return IsRandomNumberInRange(myRandomNumber);
}

private bool IsRandomNumberInRange(int num)
{
    return num > 10;
}

private bool IsRandomNumberInRange(double num)
{
    return num > 50;
}

private int GetRandomNumber()
{
    return 42;
}

If, along the way, GetRandomNumber’s return type changes from int to double, the code will still build, but a different overload of IsRandomNumberInRange is silently selected and the behaviour changes. Eliminating the use of var – declaring myRandomNumber explicitly as int – turns that silent change into a compile-time error. Obviously that’s a trivial piece of code, but the rule still applies.
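In other words, here is a minimal sketch of the explicit version that guards against this:

private bool RandomFunc()
{
    // Declaring the type explicitly fails loudly: if GetRandomNumber() is later changed
    // to return double, this line stops compiling instead of silently switching to the
    // double overload of IsRandomNumberInRange.
    int myRandomNumber = GetRandomNumber();
    return IsRandomNumberInRange(myRandomNumber);
}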

Of course, there are excellent uses of var. I use it when I’m trying some LINQ out and can’t be bothered to work out the return type of some really long LINQ operation, as illustrated below.

Given a list of Students (Name and Class):


List<Student> students = new List<Student>
{
    new Student { Name = "Albert", Class = "1A" },
    new Student { Name = "Herd", Class = "1A" }
};

Doing an OrderBy and a GroupBy on such a list will result in a complex type. I’d rather see


var grouped = students.OrderBy(student => student.Name).GroupBy(student => student.Class);

than see


IEnumerable<IGrouping<string, Student>> grouped = students.OrderBy(student => student.Name).GroupBy(student => student.Class);

I mean, spelling out that complex declared type provides no real business value; besides, in this case we’re not interested in the specifics of the type.

Here are my thoughts on when you can use var:

  • When the return type is complex and we’re not really interested in the return type. Focus on the business part rather than the technical part.
  • When you initialize the variable in the same line using a new construct.
  • Anonymous types, where var is the only option (see the snippet below).
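A quick illustration of the last two cases (the names and values are purely illustrative):

var students = new List<Student>();   // the type is already spelt out on the line via new
var point = new { X = 3, Y = 4 };     // anonymous type - an explicit declaration is impossible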

And here are my thoughts on when you should NOT use var:

  • When the overall readability of the code is being impacted. (For me, this is the deal breaker.)
  • When the variable is being initialised from a method call, so the type is not visible on the line.
  • When using var introduces the possibility of silent changes in behaviour, as in the overload example above.

So please, don’t just use var for all your declared types; use it with common sense.

Can we be a bit more careful with how we use the internal access modifier?

The other day I was writing some SharePoint code and needed a RunWithElevatedPrivileges call. This call is normally accompanied by the creation of new SPSite and SPWeb objects, as demonstrated in the RunWithElevatedPrivileges MSDN excerpt shown below. What the code inside actually does doesn’t really matter for the sake of this post.


SPSecurity.RunWithElevatedPrivileges(delegate()
{
    using (SPSite site = new SPSite(web.Site.ID))
    {
        // implementation details omitted
    }
});

This is all fine and good, but I noticed that the project I’m working on already contains loads of RunWithElevatedPrivileges calls, each with the accompanying creation of a new SPSite and SPWeb. So I thought it would be great to have an overload of RunWithElevatedPrivileges that provides a callback with the elevated SPSite and SPWeb as parameters, rather than creating them myself. Surely SharePoint offers this – but a quick look at the public SharePoint API shows that it does not.

Then I thought: how is this possible? This is a common use case; somewhere in the SharePoint API this ought to exist. So, grabbing ILSpy, I reflected the code and had a quick look. Unsurprisingly, I found the exact overload that I was looking for. Though, for some weird reason, it’s marked internal rather than public. Hold on a minute – why is this kind of API not public? This is not some internal abstraction; it’s an API that should be readily available to developers.


// Microsoft.SharePoint.SPSecurity
internal static void RunWithElevatedSiteAndWeb(SPWeb originalWeb, SPSecurity.CodeToRunWithElevatedSite secureCode)
{
    if (originalWeb.CurrentUser != null && originalWeb.CurrentUser.ID == 1073741823 && !originalWeb.Site.HasAppPrincipalContext)
    {
        secureCode(originalWeb.Site, originalWeb);
        return;
    }
    SPSecurity.RunWithElevatedPrivileges(delegate
    {
        using (SPSite sPSite = new SPSite(originalWeb.Site.ID, originalWeb.Site.Zone))
        {
            using (SPWeb sPWeb = sPSite.OpenWeb(originalWeb.ID))
            {
                secureCode(sPSite, sPWeb);
            }
        }
    });
}

This made me think: can we be a bit more careful with how we use the internal access modifier? I understand that portions of the code should be private, since such code is only used within the same class to simplify the underlying implementation. But an API that keeps a LOT of clearly useful methods internal is a big no for me. It adds no business value to the API, just frustration for the end developer, who needs to re-implement (or copy) the same implementation in his own solution – as sketched below.
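For illustration, here is roughly what one ends up writing in one’s own solution; the helper class name is mine, and the body is simply a trimmed-down copy of the reflected code above:

// Hypothetical helper that mimics the internal SPSecurity.RunWithElevatedSiteAndWeb overload.
public static class ElevationHelper
{
    public delegate void CodeToRunWithElevatedSite(SPSite elevatedSite, SPWeb elevatedWeb);

    public static void RunWithElevatedSiteAndWeb(SPWeb originalWeb, CodeToRunWithElevatedSite secureCode)
    {
        SPSecurity.RunWithElevatedPrivileges(delegate
        {
            // Re-open the site and web so that they run under the elevated (system account) context.
            using (SPSite elevatedSite = new SPSite(originalWeb.Site.ID, originalWeb.Site.Zone))
            using (SPWeb elevatedWeb = elevatedSite.OpenWeb(originalWeb.ID))
            {
                secureCode(elevatedSite, elevatedWeb);
            }
        });
    }
}

// Usage:
// ElevationHelper.RunWithElevatedSiteAndWeb(web, (site, elevatedWeb) => { /* elevated work here */ });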

Obviously, I am not saying that ALL internal methods are badly designed; if that were the case, the modifier would not exist at all. I’m saying that API developers should think twice before marking as internal an API which could clearly be used by 3rd-party developers. Private methods are fine, but with internal ones I think one needs to be a bit more careful.

Or… maybe this is just one of the many, many quirks of the SharePoint API.