Overclocking your Zen 3 / Ryzen 5000 with Precision Boost Overdrive 2 and Curve Optimizer

Ever since I wrote about my experience using Precision Boost Overdrive 2 and Curve Optimizer in my last blog post, I have been asked several questions on how to overclock a Ryzen 5000 CPU. Let's discuss the basics of overclocking on Ryzen 5000.

Please treat this as a beginner's guide – you'll need to spend a lot of time tweaking, especially on the Curve Optimizer. This is not an ultimate overclocking guide, and some people might (and already did) disagree with the values and flow of this guide. Having said that, even if other approaches may be better, they will be only slightly better – maybe 1-3%, within margin of error. Following this guide WILL net you a performance gain; maybe not the BEST performance gain, but a measurable one.

The following guide should work for the following CPUs:

  • Ryzen 9 5950X
  • Ryzen 9 5900X
  • Ryzen 7 5800X
  • Ryzen 5 5600X

The following should similarly work for the Ryzen 3000 series, but you will not have access to the Curve Optimizer. Blame AMD for this.

Ryzen 5000 – Traditional overclocking is dead

Traditional overclocking involved going into the BIOS, typing in a nice voltage and a reasonable clock speed, and calling it done. You can still do that, and you will get a nice score in Cinebench, but you'll lose everyday performance. Why?

By forcing a fixed clock, you will be losing the higher boost clocks achieved during lightly threaded workloads (unless you manage to overclock to a fixed 5 GHz… if you do that, please write a guide for us!). These are the kinds of workloads that you go through every day, which are the most important. If you set a maximum overclock of, say, 4.6 GHz, you won't be able to go over 4.6 GHz in common tasks, which will slow them down.

Ryzen’s boost algorithm is smart

On the other hand, Ryzen's boost algorithm is designed to go past the usual clocks and boost as much as possible, given there is enough power coming in and the temperatures are in check. Trust the AMD engineers in this case. In my case, my 5900X is easily able to go past 5 GHz.

The golden trio – PBO2, Power Settings and Curve Optimizer

In order to achieve an actual "overclock" on Ryzen 5000, we'll need to dive into three major components – PBO2, Power Settings and Curve Optimizer.

Precision Boost Overdrive 2

Precision Boost Overdrive (PBO for short) extends the out of the box parameters that dictate performance on a Ryzen CPU – temperature, SoC (chip) power and VRM current (power delivery). PBO extends the maximum threshold for these components, allowing faster clock speeds to be sustained for longer. In short, this is AMD's overclocking capability baked into your CPU.

PBO Triangle – via https://hwcooling.net

Power Settings

Here, I’m referring to the three major power settings – PPT, TDC and EDC. PPT is the total power that the CPU can intake. TDC is the amount of amperage the CPU is fed, under sustained load (thermally and electrically limited). EDC is the amount of amperage the CPU is fed, under short bursts (electrically limited). Allowing the CPU to take more power overall allows the CPU to boost to higher clock speeds. From the PBO triangle analogy, this positively impacts the left and right vertices – SoC power and VRM Current, while negatively impacting the top vertex – heat.

Curve Optimizer

Curve Optimizer allows you to undervolt your CPU. Undervolting means that you're pushing slightly less voltage, which consumes less power and generates less heat. Combined with Precision Boost Overdrive 2, this means you're producing less heat, allowing the CPU to boost to higher clock speeds for longer. In the PBO triangle analogy, this mostly impacts the top vertex – heat.
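
For a sense of scale: AMD's PBO2 briefing reportedly puts each Curve Optimizer count at roughly 3-5 mV, so a negative 15 offset works out to somewhere around 45-75 mV less voltage at a given point on the voltage/frequency curve. Treat these as ballpark figures, not a spec.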

Striking a balance with your settings and overclocking your Ryzen 5000

Now, that we’ve established our three main players, let’s tackle them one by one. To access these settings, you’ll need to access your BIOS – these settings are typically located in Advanced -> AMD Overclocking -> Precision Boost Overdive. Here’s a sample from my ASRock x570 Steel Legend.

After discussions with my readers, people seem to be suggesting different priorities when it comes to overclocking. I believe that a modest yet stable overclock can be achieved by prioritizing these:

  1. Scalar / Max CPU Override
  2. Power Settings
  3. Curve Optimizer

Some readers believe that the best priority is:

  1. Curve Optimizer
  2. Power Settings
  3. Scalar / Max CPU Override

If you are confused like me, pick the easiest and consider following this guide. Both will provide a nice performance gain, and the difference between one method and the other may be 1-2% more gain, which is negligible in real life.

Precision Boost Overdrive 2

This should be the easiest – let's just follow AMD's recommendations. Looking at their slides (AMD Precision Boost Overdrive 2: Official Tech Briefing! – YouTube), we can start with the settings that matter for turning on PBO.

  • Precision Boost Overdrive – Advanced
    • Allows us to turn on PBO and make manual adjustments to PBO settings.
  • PBO Scalar – 10X
    • Should allow you to sustain boost clocks for longer.
    • Some readers argue that this value should actually be 1X; I cannot verify this. These readers argue that setting it to 10X will raise your overall voltage. During my brief testing, I've observed that this is not the case, but this statement can (and might) change with more testing.
  • Max CPU Boost Clock Override – 200MHz
    • Raises your maximum frequency by 200MHz. On a 5900X, this translates to a theoretical limit of 5150MHz, which is realistic.
    • I am told by my readers that setting a +200 boost on the Max CPU Boost Clock Override might negatively impact how far you'll end up pushing the Curve Optimizer. Unfortunately, I have neither the time nor the data to back this up.
    • PURE SPECULATION / MY THOUGHTS AHEAD (no data to back up this claim whatsoever) – by reducing the Max CPU Boost Clock Override, you'll of course be losing the highest single core boost clock speeds, **potentially** reducing single core performance, but you'll be able to push a higher multi core score, or reach the "lower" maximum single core boost more regularly. These would require extensive separate testing (and probably translate into margin of error when it comes to results).

Power Settings

In their slides (link above), AMD suggest using Power Limits = Motherboard. I strongly discourage this, as it may limit your power intake (this was noticed both by me and by readers of my blog – My Experience with Precision Boost Overdrive 2 on a 5900X – Albert Herd, comment by Julien Galland).

For my 5900X, these are the settings that I've applied. If you got a 5950X or a 5900X, these values may (or may not) be suitable for you. If you got a 5800X or lower, these values are too high and will hinder performance. Apply lower settings to accommodate your CPU – take AMD's default values (quoted below) and apply a decent bump. Unfortunately, I don't own anything other than a 5900X, so I cannot vouch for these settings on other models.

  • If you got very good cooling (such as a custom loop or strong cooling in general)
    • PPT – 185W
    • TDC – 125A
    • EDC – 170A
  • If your cooler gets too hot with these settings, try a more conservative set. In my case, these settings hover around 70-75°C
    • PPT – 165W
    • TDC – 120A
    • EDC – 150A

You might notice that your CPU runs too "cool" or too hot. In this case, adjust your figures accordingly. In a multi core benchmark, these figures should all hit 100%. In most workloads, it's the EDC that plays a role rather than the TDC (since most workloads are short bursts). I also noticed that going too low on EDC will cause instability.

Leave SOC TDC and SOC EDC at 0; these should not impact us (I believe they mostly apply to APUs).

For completeness' sake, please keep in mind AMD's default values when making adjustments:

  • Package Power Tracking (PPT): 142W for the 5950X, 5900X and 5800X; 88W for the 5600X.
  • Thermal Design Current (TDC): 95A for the 5950X, 5900X and 5800X; 60A for the 5600X.
  • Electrical Design Current (EDC): 140A for the 5950X, 5900X and 5800X; 90A for the 5600X.
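
As a worked example: my 185W / 125A / 170A settings above are roughly a 30% PPT bump, a 32% TDC bump and a 21% EDC bump over the 5900X defaults. Applying proportionally similar headroom to a 5600X would land at very roughly 115W / 78A / 110A – an illustrative starting point only, not values I have tested.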

Curve Optimizer

This is probably the most annoying one. The numbers you’re inputting here will vary significantly from one chip to another, so your mileage may vary. These are my values:

  • Negative 11 for the first preferred core on CCX 0 (as indicated by Ryzen Master)
  • Negative 15 for the second preferred core on CCX 0 (as indicated by Ryzen Master)
  • Negative 17 for the other cores.

If you want to start safe, you can apply a Negative 10 offset on all cores.

Testing this setting is extremely painful. You'll notice that crashes will not happen under load; crashes will happen under idle conditions, where your CPU undervolts too much. Hopefully, AMD will look at this algorithm in future BIOS updates and provide more stability. In my experience, Geekbench 5 (Cross-Platform Benchmark) is a great tool to stress the CPU out; it tends to crash the system when the settings are not right.

Please keep in mind the note that I've written about the Max CPU Boost Clock Override (under the Precision Boost Overdrive 2 header). Some users note that they prefer to keep the Max CPU Boost Clock Override lower and push a more aggressive curve.

In my next post, we will look at how to get the best performance from your RAM, by applying specific DRAM configurations according to the RAM sticks you own. If you feel adventurous and feel like you can do it on your own, go ahead.

Thanks for reading!

My Experience with Precision Boost Overdrive 2 on a 5900X

Looking for the TL;DR? These are my everyday settings:

  • PPT – 185W, TDC – 125A, EDC – 170A. To run these power settings, you'll need a beefy cooler. If the CPU gets too hot with these power settings, try PPT – 165W, TDC – 115A, EDC – 150A
  • Negative 11 for the first preferred core on CCX 0 (as indicated by Ryzen Master)
  • Negative 15 for the second preferred core on CCX 0 (as indicated by Ryzen Master)
  • Negative 17 for the other cores.
  • These moved my multithreaded Cinebench R20 score from 8250 to around 8800-9000 (6-9% gain) and my single threaded Cinebench R20 score from 630 to 650 (3% gain).

__________________________________________________________________________________________________________________

Recently, AMD announced a new algorithm for Precision Boost Overdrive (PBO), aptly named Precision Boost Overdrive 2 (PBO2). You can read more here: AMD Ryzen™ Technology: Precision Boost 2 Performance Enhancement | AMD and here: AMD Introduces Precision Boost Overdrive 2, Boosts Single Thread Performance | Tom's Hardware. This post is not intended to explain the technicalities of the feature, but rather how to take advantage of it.

To get started, you will need to navigate to the BIOS. Unfortunately, you cannot currently use Ryzen Master to do this, but AMD claims that this will be part of Ryzen Master in a future release. In the PBO section, you will need to adjust some settings.

Navigating to AMD Overclocking in the BIOS

My specs are as following:

  • AMD Ryzen 5900X
  • ASRock X570 Steel Legend
  • 32GB C17 Memory
  • 750W PSU
  • 240mm AIO from be quiet!

At first, naively, I set the power limits (PPT, TDC and EDC) to 0, which means unlimited. This, in turn, has a negative effect: it lets the CPU draw as much power as it can, which translates into unnecessary power consumption and heat, which in turn limits the maximum clock speed achieved. I'd suggest sticking to values which keep the CPU under (or close to) 80°C under full load.

In my case, the maximum power settings I manage to sustain are: PPT – 185W, TDC – 125A, EDC – 170A. The recommended values for your CPU will vary according to the silicon quality and the cooling provided. Cooling 185W is not an easy feat – you'll need a good cooler, such as a Noctua NH-D15 or a good AIO (I am using the be quiet! Pure Loop 240mm).

Setting the PPT, TDC and EDC to well balanced values is extremely important; this will help you strike the balance between the power the CPU needs and realistic temperatures. If the CPU gets too hot with these power settings, try PPT – 165W, TDC – 115A, EDC – 150A.

I have set the PBO Scalar to manual and 10X. I will be honest – I am not sure what impact this has, but it looks like a setting which needs tweaking. I've tried 1X and honestly did not feel any difference. From what I understand, this controls how long the CPU keeps pushing high voltages / clocks before it dials them down. In burst scenarios, this should not have any impact.

Max CPU Boost Clock Override should be set to 200MHz. This allows for higher clock speeds on single threaded workloads. My 5900X can hit 5.15 GHz with this setting on a single core. 5.15 GHz is not a one-off number; I regularly see it during light workloads.

Navigating to the Curve Optimizer in BIOS

Now, for the most important part: the Curve Optimizer. As a starting point, for the best and second-best core on each CCD, I set this to negative 10, and for the other cores, negative 15.

The next step is quite difficult to instruct, as it purely depends on your silicon quality. In my case, I found the following settings to work for me:

  • Negative 11 for the first preferred core on CCX 0 (as indicated by Ryzen Master)
  • Negative 15 for the second preferred core on CCX 0 (as indicated by Ryzen Master)
  • Negative 17 for the other cores

It took quite a lot of testing to arrive at these figures. You can find the first and second preferred cores in Ryzen Master.

Per-core adjustments in the Curve Optimizer

Firstly, I started with negative 20 on all cores. This resulted in awesome Cinebench R20 scores but poor stability. I then went to negative 15 on all cores. This was not bad, but I was experiencing a crash every now and then, especially when the PC was running cold and able to push more clocks. It would run all day, but on boot, pushing it would instantly result in a crash. This told me that the algorithm was trying to push for more clocks, but the undervolting was too aggressive.

I then went to negative 10 on all cores, which was fully stable. Finally, I pushed negative 15 on those cores which are not first or second preferred. This remained stable, and eventually I started changing the values slightly every day. Sometimes I go too far and get a WHEA BSOD (especially when the PC is cool and under light workloads).

These moved my multithreaded Cinebench R20 score from 8250 to around 8800-9000 (a 6-9% gain) and my single threaded Cinebench R20 score from 630 to 650 (a 3% gain). These are small gains, but when they come at no cost, it's good to take advantage of them. And yes, these do not really translate into any tangible performance uplift in everyday computing.

Preferred Cores (Star is 1st, dot is second)

The performance uplift is thanks to higher sustained clocks. With PBO turned off, I was sustaining around 4.1 GHz core clock and with PBO on, I am sustaining between 4.4-4.5 GHz in Cinebench R20.

Cinebench scores with PBO2
Full load under Cinebench R20

Simpler (non-AVX) workloads will clock past 4.5 GHz. I suspect that Ryzen dials the clocks down a bit during AVX workloads, but I cannot confirm this.

Full load under a synthetic load – Memtest 64

Please let me know your experience with PBO2 and whether you find this post useful. If you got better settings than mine, I appreciate the feedback! Of course, keep in mind that as AMD said, no processor is the same; some might need more voltage than others to remain stable. It also depends on the power delivery quality, the sustained temperatures, the quality of the thermal paste, the overall case temperature and a plethora of other things, as mentioned in the first link to AMD’s site.

Outputting DS18B20 temperatures on a LCD1602 – Raspberry Pi Temperature Monitoring Part 4

This is part of a tutorial series. If you feel a bit lost, I suggest following the tutorial in order:

Just interested in the code? – https://github.com/albertherd/DS18B20Reader

This article assumes that you’ve configured one or more DS18B20 sensors to your Raspberry Pi and configured your Raspberry Pi to work with an LCD1602. If you did not, read the above mentioned links.

Okay – this should be a quick post – most of the heavy lifting is done. Remember the LCD1602 library that we used in the second tutorial? We'll be using that to simply take the temperature we captured in the third tutorial and display it.

We’ll need to do the following changes:

  1. Import the LCD1602 library.
  2. Initialize the LCD1602 on application startup.
  3. Read and display information on the LCD1602.
  4. Cleanup resources before exiting – this is important since we’ll need to turn off the backlight after usage.

Import the LCD1602 library

Get a copy of this repository – either by cloning it or by simply copying the lcd1602.c and lcd1602.h files into your solution. We'll also be adding them to our CMake file – it should look something like the following. I've left the solution on debug in this case.

set(SOURCE main.c sensor.c lcd1602.c main.h sensor.h lcd1602.h)
set(CMAKE_BUILD_TYPE Debug)
add_executable(DS18B20Reader ${SOURCE})

Now it’s simply just adding a reference to the lcd1602.h in the solution. Next!

Initialize the LCD1602 on application startup

We’ll need to initialize the library and open a connection to the display under the right address – check your device address by checking the third tutorial. To simplify things, I hard-coded the value which is 0x27 in my case. Call this when initializing the application.

void InitializeLCD()
{
    // LCDADDRESS is the I2C address discovered via i2cdetect - hard-coded to 0x27 in my case
    int rc = lcd1602Init(1, LCDADDRESS);
    if (rc)
    {
        printf("Initialization failed; aborting...\n");
        return;
    }
}

Read and display information on the LCD1602

We’ll be modifying our main loop to output content to the console (not important though and output to the LCD1602 screen. The main adjustment we’ve did in the main loop is that we’ve broken down our output to two functions – Outputting to console (not important) and outputting to the LCD1602 – Let’s see the main loop:

void ReadTemperatureLoop(SensorList *sensorList)
{
    while(!sigintFlag)
    {
        for(int i = 0; i < sensorList->SensorCount; i++)
        {
            float temperature = ReadTemperature(sensorList->Sensors[i]);
            PrintTemperatureToLCD1602(sensorList->Sensors[i], i % LCD1602LINES, temperature);
            LogTemperature(sensorList->Sensors[i], temperature);
        }
    }
}

Let’s now have a look at the important method – PrintTemperatueToLCD1602. Keeping in mind that the LCD1602 has two lines, we’ll be receiving the calculated line number as a parameter. This will make sure that values will lie between 0 and 1 only using modulus.

We’ll also need to remember that each line will hold up to 16 characters, so we’re truncating anything more than 16 characters (actually 16 + 1 for null termination). We’ll then just pass the (potentially truncated) string to the LCD1602 and et voila!

void PrintTemperatureToLCD1602(Sensor *sensor, int lineToPrintDataOn, float temperature)
{
    char temperatureString[LCD1602CHARACTERS + 1];
    snprintf(temperatureString, LCD1602CHARACTERS + 1, "%s : %.2fC", sensor->SensorName, temperature);

    lcd1602SetCursor(0, lineToPrintDataOn);
    lcd1602WriteString(temperatureString);
}

Cleanup resources before exiting

After we’re done, it’s just a matter of cleaning up resources. As previously mentioned, this is important since we’ll need to turn off the backlight after usage. In the cleanup method, we’re just calling the lcd1602Shutdown method.

void Cleanup(SensorList *sensorList)
{
    printf("Exiting...\n");
    FreeSensors(sensorList);
    lcd1602Shutdown();
}

Fetch a complete copy of the code from GitHub – https://github.com/albertherd/DS18B20Reader

Let’s run the application! In a terminal with git and cmake installed, run the following commands

git clone https://github.com/albertherd/DS18B20Reader
cd ./DS18B20Reader
cmake . && make && ./DS18B20Reader "Sensor"

With some luck, your LCD1602 should display something like the below. In my case, I have two sensors, so I fired up the application using the following syntax:

./DS18B20Reader "Sensor1" "Sensor2"

LCD1602 displaying the sensor names and temperatures

Until the next one!

Using C to monitor temperatures through your DS18B20 thermal sensor – Raspberry Pi Temperature Monitoring Part 3

This is part of a tutorial series. If you feel a bit lost, I suggest following the tutorial in order:

Just interested in the code? – https://github.com/albertherd/DS18B20Reader

Since you made it here, great! Your Raspberry Pi should have one or more DS18B20 thermal sensors connected, like the image below.

DS18B20 sensors connected to the Raspberry Pi

Now that we have our DS18B20 thermal sensor connected to our Raspberry Pi, it's time to do some programming to read out the temperature! Our application will need to handle the following tasks:

  1. Discover all the DS18B20 sensors (in my case, I’ve connected 2 but this application should handle an arbitrary number of sensors).
  2. Assign a friendly name so we’ll know which sensor is which and store them in a list.
  3. Retrieve and parse the information from the device.
  4. Do whatever necessary with the gathered information.

1) Discover all the DS18B20 sensors

First and foremost, our code cannot just assume that devices exist on the system – we'll need to go and discover them. Since the DS18B20 makes use of the 1-Wire protocol, devices will live under the /sys/bus/w1/devices/ directory. Our code will therefore need to find devices living under this directory whose names start with 28, the family code for the DS18B20. Let's start by counting how many devices are connected.

typedef struct Sensor
{
    char *SensorName;
    FILE *SensorFile;    
} Sensor;
 
typedef struct SensorList
{
    Sensor **Sensors;
    int SensorCount;
} SensorList;

// Requires <dirent.h>, <stdio.h>, <stdlib.h> and <string.h>
DIR *dir;
struct dirent *dirEntry;

SensorList *sensorList = malloc(sizeof(SensorList));
sensorList->SensorCount = 0;

if(!(dir = opendir("/sys/bus/w1/devices/")))
    return sensorList;

while((dirEntry = readdir(dir)))
{
    if(strncmp(dirEntry->d_name, "28", 2) == 0)
    {
        sensorList->SensorCount++;
    }
}

2) Assign a friendly name so we’ll know which sensor is which and store them in a list

Now that we’ve discovered the devices connected to the system, it’s time to save a reference and optionally a friendly name as well. Logic mostly applies from step 1.

// sensorNames and sensorNamesCount come from the command-line arguments
sensorList->Sensors = malloc(sizeof(Sensor*) * sensorList->SensorCount);
Sensor **currentSensor = sensorList->Sensors;

// The directory was already walked once while counting - start over
rewinddir(dir);

int sensorNamesAllocated = 0;
while((dirEntry = readdir(dir)))
{
    if(strncmp(dirEntry->d_name, "28", 2) == 0)
    {
        char *sensorName;
        if(sensorNamesCount > sensorNamesAllocated)
        {
            sensorName = strdup(*sensorNames);
            sensorNames++;
            sensorNamesAllocated++;
        }
        else
        {
            sensorName = strdup("Sensor");
        }

        char sensorFilePath[64];
        sprintf(sensorFilePath, "%s%s%s", "/sys/bus/w1/devices/", dirEntry->d_name, "/w1_slave");
        *currentSensor = GetSensor(sensorFilePath, sensorName);
        currentSensor++;
    }
}

Sensor *GetSensor(char *sensorId, char *sensorName)
{
    Sensor *sensor = malloc(sizeof(Sensor));
    sensor->SensorFile = fopen(sensorId, "r");
    sensor->SensorName = sensorName;
    return sensor;
}    

3) Read the temperature from the device

This is the most exciting part – we actually get to read the temperatures! Using the sensor information we got from steps 1 and 2, we can get the device, open it as a file, extract the readings and parse it accordingly. Using the FILE API makes it very easy to do so – grab all the contents and store it in a buffer. As mentioned in the first tutorial, the content of the file looks as follows –

0b 01 55 05 7f 7e 81 66 bf : crc=bf YES
0b 01 55 05 7f 7e 81 66 bf t=16687

We’re only intereested in the t= component, so some string manipulation and float conversion will take the 16687 and convert it into 16.687C. We’re also doing some range checking since the DS18B20 is rated between -55C and +125C

   
float ReadTemperature(Sensor *sensor)
{
    long deviceFileSize;
    char *buffer;

    FILE *deviceFile = sensor->SensorFile;
    fseek(deviceFile, 0, SEEK_END);
    deviceFileSize = ftell(deviceFile);
    fseek(deviceFile, 0, SEEK_SET);

    // +1 so the buffer is always null-terminated for strstr
    buffer = calloc(deviceFileSize + 1, sizeof(char));

    fread(buffer, sizeof(char), deviceFileSize, deviceFile);
    char *temperatureComponent = strstr(buffer, "t=");
    if(!temperatureComponent)
    {
        free(buffer);
        return -1;
    }

    temperatureComponent += 2; //move pointer 2 spaces to compensate for t=

    float temperatureFloat = atof(temperatureComponent);
    temperatureFloat = temperatureFloat / 1000;

    // The DS18B20 is rated between -55C and +125C - clamp anything outside that range
    if(temperatureFloat < -55)
        temperatureFloat = -55;
    if(temperatureFloat > 125)
        temperatureFloat = 125;

    free(buffer);
    return temperatureFloat;
}

4) Do whatever necessary with the gathered information

We now have the information at hand, great! We can do many sorts of things with it, such as sending an email, activating some other device or whatever is necessary. For demo purposes, we're simply going to output the contents to the console just to see it working. This is in a loop, so we'll keep reading the temperature until the application exits.

time_t currentTime; // requires <time.h>

while(1)
{
    for(int i = 0; i < sensorList->SensorCount; i++)
    {
        currentTime = time(NULL);
        char dateTimeStringBuffer[32];
        strftime(dateTimeStringBuffer, 32, "%Y-%m-%d %H:%M:%S", localtime(&currentTime));

        float temperature = ReadTemperature(sensorList->Sensors[i]);
        printf("%s - %s - %.2fC\n", dateTimeStringBuffer, sensorList->Sensors[i]->SensorName, temperature);
    }
}

To try out the code as a whole solution, grab a copy from GitHub – https://github.com/albertherd/DS18B20Reader

In a terminal with git and cmake installed, run the following commands:

git clone https://github.com/albertherd/DS18B20Reader
cd ./DS18B20Reader
cmake . && make && ./DS18B20Reader "Sensor"

If the output looks like the below, congratulations!

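Here's an illustrative sample of the format (your timestamps, sensor names and readings will differ):

2019-01-26 14:30:01 - Sensor - 21.43C
2019-01-26 14:30:02 - Sensor - 21.45C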

In the next tutorial, we’ll pick up from here and we’ll start outputting the content on an LCD1602! Until the next one.

Connecting a LCD1602 with an I2C module to your Raspberry Pi – Raspberry Pi Temperature Monitoring Part 2

The LCD1602 is a very famous LCD that can be connected to various devices, such as the Raspberry Pi. The LCD1602 on its own is quite tricky to wire up, since it requires 16 pins to be connected. The LCD1602 can also be purchased with an I2C module, which reduces the number of pins needed to just 4.

For this tutorial, we’ll be working with a LCD1602 with an I2C module. I got mine from AliExpress for around $2.50. Make sure to grab a set of jumper cables as you’ll need them to connect the LCD to the Raspberry Pi. I got mine from AliExpress as well for around $1.50.


Let’s start by wiring it up. We have 4 pins connect – GND (ground), VCC (power, 5V), SDA (data line) and SCL (clock line). GND and VCC can be connected to any equivalent GND and 5V pin. SDA and SCL should be connected to pins BCM 2 and BCM 3 accordingly.

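To spell out the wiring (physical pin numbers refer to the standard 40-pin header – double-check against your board's pinout):

  • GND – any ground pin (e.g. physical pin 6)
  • VCC – 5V (physical pin 2 or 4)
  • SDA – BCM 2 (physical pin 3)
  • SCL – BCM 3 (physical pin 5)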

If you’re following the Raspberry Pi Temperature Monitoring Part 1 and connected the DS18B20 temperature sensors, you should now have the following configuration.


Great! We’re done from the hardware’s side – let’s start configuring our Raspberry Pi to communicate with our LCD.

Firstly, let’s enable I2C from the Raspberry Pi Config. Fire up the raspi-config to get started: sudo raspi config

Now navigate to Interfacing Options => I2C => Enable I2C


Now that we’ve enabled I2C communication, it’s time to start development! We’ll need to get some tools before we start working though, so fire up a shell and input:

sudo apt-get install i2c-tools

Once that’s done, the LCD is ready to be programmed! Let’s make sure that the LCD is properly connected and working. In a shell, type:

i2cdetect -y 1

The output should be something like the below. Note the number outputted by the command; it will be needed later on when we try out the demo code. In this case, the address is 27.

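Each detected device shows up at its address – 27 in my case (an illustrative grid, not my exact output):

     0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f
00:          -- -- -- -- -- -- -- -- -- -- -- -- --
10: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
20: -- -- -- -- -- -- -- 27 -- -- -- -- -- -- -- --
30: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
40: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
50: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
60: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
70: -- -- -- -- -- -- -- --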

Great! Now, it’s time to test out our display and see if it works! We’ll be using a Github library – https://github.com/albertherd/LCD1602. This has been forked from https://github.com/bitbank2/LCD1602. We’ll be using my fork since the original repository has an unresolved issue with clearing the display.

After you’ve cloned the repository in your working directory, it’s time to use the address (27 in my case) obtained earlier. Open the main.c and find the call to lcd1602Init and change second parameter. This is how it looks in my case:

lcd1602Init(1, 0x27);

Now it’s time to compile and run our code. If all goes well, we should be getting some text on the screen. You can change the text to whatever you’d like by changing the following lines in main.c.

lcd1602WriteString("BitBank LCD1602");
lcd1602SetCursor(0,1);
lcd1602WriteString("ENTER to quit");

Build and run using the following commands:

make
make -f make_demo
./demo

The screen should look like the below:

The LCD1602 displaying the demo text

Great! Now we’ve successfully connected our LCD1602 to our Raspberry Pi and we’re able to output content on it!

In the next part of this tutorial series, we’ll start by capturing the temperature using the sensor in our first part of the tutorial and outputting it! Stay tuned.

Connecting a DS18B20 thermal sensor to your Raspberry Pi – Raspberry Pi Temperature Monitoring Part 1

A project that I’ve been working on during the Christmas holidays was to hook up some thermal probes to my Raspberry Pi, just to play around. This tutorial simply follows the steps that I’ve taken to achieve so.

You’ll need:

  • Raspberry Pi, any flavor as long as it has GPIO headers available. I had a Raspberry Pi 2, so I used that.
  • You’ll also need the usual suspects – USB to MicroUSB to hook it up to power, HDMI to connect it to a display for initial configuration and an ethernet port to manage it through SSH. I highly recommend configuring SSH rather than using the device itself. This tutorial assumes you’re using SSH.
  • A DS18B20 sensor – I'd suggest getting one which includes a pluggable terminal to avoid soldering – just wire it up and you're good to go. I got mine from AliExpress
  • Also make sure your kit has 3 jumper cables. They are typically included. Just to be sure, I also got a set of female to female jumper cables from AliExpress though I did not use them for the DS18B20 sensor.

All right, let’s wire it up! The DS18B20 sensor requires three pins – data, VCC (3.3V), and ground. Connect the wires as below. Data is yellow, VCC is red and ground is black.


Connect the 3 pins using the jumper cables as shown below.

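In terms of actual pins, the mapping is as follows (assuming you keep the default 1-wire data pin, BCM 4 – see the configuration below). Note that the data line needs a 4.7kΩ pull-up resistor to VCC; sensor kits with a terminal board typically have one built in.

  • VCC (red) – 3.3V (physical pin 1)
  • Data (yellow) – BCM 4 (physical pin 7)
  • Ground (black) – any ground pin (e.g. physical pin 9)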

We’ll also need to instruct the Raspberry Pi that we’re going to connect the DS18B20 sensor. This sensor makes use of the 1-Wire protocol, so let’s activate it:

  • Connect to the Raspberry Pi using SSH
  • Let’s start by editing the config file that the Raspberry PI parses every time it boots up: sudo nano /boot/config.txt
  • Go to the end of the document and input the following. Specifying gpiopin=4 is actually optional, since by convention 1-wire devices are expected on GPIO pin 4 on the Raspberry Pi.
    # Enable OneWire Protocol
    dtoverlay=w1-gpio,gpiopin=4
  • Time to reboot the Raspberry Pi: sudo reboot
  • Once the Raspberry Pi reboots and you re-connect using SSH, it's time to get data from the sensor! Let's find the 1-wire devices connected to the system. Let's start by browsing to the appropriate directory. cd /sys/bus/w1/devices
  • Great! Let’s now see the devices attached to the Raspberry Pi. ls
  • This will list the devices attached using the 1-Wire protocol. You should have a device called 28-xxxxxxxxxxxx (where x stands for your unique 12 digit serial number). Let's now browse the device. Mine is 28-02199245e07b, so let's use it as an example. cd 28-02199245e07b
  • Once you access the device, there should be a file called w1_slave. Let’s see the contents of the file. cat w1_slave
  • The file should look like this:
    0b 01 55 05 7f 7e 81 66 bf : crc=bf YES
    0b 01 55 05 7f 7e 81 66 bf t=16687
  • If the file looks like the above, great! The temperature component is t=16687. The temperature in this case is 16.687 °C
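
Putting the whole check together, the session looks like this (substitute your own serial number):

cd /sys/bus/w1/devices
ls
cd 28-02199245e07b
cat w1_slave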

We can also take this to the next level and add another thermal probe! Attach it in the same way as the first sensor, on a different GPIO pin.

This will require re-editing the /boot/config.txt. Let’s do it!

  • Re-open /boot/config.txt – sudo nano /boot/config.txt
  • Go to the end and add the following. I chose pin 24 because it's easy to wire, since it's close to a 3.3V and a ground pin. dtoverlay=w1-gpio,gpiopin=24
  • Close and save, reboot, then cd /sys/bus/w1/devices
  • You should now see two devices named 28-xxxxxxxxxxxx

Of course, at this stage we did get the temperature, but it’s not really usable. We can get access to this information programmatically – this is what we’ll be doing in the next part of this tutorial. We’ll also be eventually showing the information on a separate LCD screen! Stay tuned!

UPDATED: Intel and its flawed Kernel Memory Management Security

It has emerged that Intel CPUs made in the last decade or so are missing proper checks when it comes to securing Kernel Memory. It would seem that through special (undocumented) steps, a User-Mode application can peek and make changes to Kernel-Mode Memory. This means that any application, such as your browser, can access and change your system memory.

Some theory

In the 32-bit era, an application could typically access up to 4GB of RAM; this was the de-facto standard for ages. What really happened is that the application had access to 2GB of User-Mode memory (typically holding the memory needed by the application to function), while the other 2GB was mapped to Kernel space, containing memory locations for Kernel-Mode memory.

In the 64-bit era, these memory limitations were lifted, since a 64-bit architecture can address a much larger space (16 exabytes, to be exact). Given that the Kernel-Mode address space is so much larger (2^48 bytes, or 256TB, of virtual address space on today's x86-64 CPUs), the OS can place Kernel-Mode memory anywhere it pleases, randomly. This randomness (Address Space Layout Randomization) makes it much harder for foul-playing applications to find the addresses of Kernel-Mode functions.

So, what’s happening?

Typically, code that runs in User-Mode does not have access to Kernel-Mode memory. Both share one set of memory tables so that when an application switches to Kernel-Mode (needed, for example, to open a file from disk), the Kernel-Mode memory is still accessible, avoiding the need to have two memory tables – one for User-Mode and one for Kernel-Mode. Having more than one table would mean that on every sysenter (or equivalent), tables need to be swapped, caches need to be flushed, and all the overhead such operations carry would be incurred.

It would seem that on Intel CPUs, hackers have found a way to bypass this security feature. This means that a User-Mode application can now access Kernel-Mode memory, which is devastating. A User-Mode application could apply small changes to the Kernel and change its functionality. Since an application has access to Kernel memory, a hacker can basically do whatever they please with the target's system.

How can this be fixed?

Unfortunately, an easy fix is not available. The whole memory management logic needs to be re-written so that instead of having just one memory table, which maps both User-Mode and Kernel-Mode memory, an additional table holds the Kernel-Mode memory; this table will only be accessible from Kernel-Mode. The change is being dubbed Kernel page-table isolation (KPTI, previously known as KAISER).

Adding a new memory table and switching to-and-fro has negative effects on overall system performance, especially in I/O heavy applications, since I/O involves a lot of User-Mode to Kernel-Mode switching. Given that the new code needs to run every time the system switches from User-Mode to Kernel-Mode, performance degradation is expected. Unofficial figures quote between 5% and 30% performance impact, depending on the application. OC3D has provided some benchmarks; FS-Mark (an I/O benchmark) shows a devastating hit in performance. PostgreSQL reported a best case of 17% slowdown and a worst case of 23% with the new Linux patch.

Which operating systems are vulnerable?

Basically, all operating systems are vulnerable to this hack. This is because the bug goes beyond the operating system – it lives in the CPU rather than at the operating system level. Scary! Vendors have been (secretly) informed of this issue and are working on fixing the vulnerability.

Are non-Intel CPUs vulnerable?

All we know at the moment is that AMD CPUs are NOT vulnerable; this has been confirmed by AMD themselves. In fact, Tom Lendacky from AMD has issued a fix for the Linux kernel itself, adding a check so that the mitigation is not applied if the CPU is AMD.

What’s next? How can I stay safe?

If you got an AMD CPU, well then congratulations, you’re safe! If you’re on an Intel System, don’t panic just yet. Yes, you are vulnerable, but yes, you still control what you do with your computer. If you don’t visit dodgy websites and don’t install dodgy applications, you’ll remain safe. But that’s age-old advice.

I hate it when my laptop’s fan switches on – here’s how I solved it (Part 1)!

I’ve made it a point that I’d buy my laptop equipped with a Intel U-Based – this is to make sure that my laptop is as light, power efficient and quiet as possible. My HP Spectre X360 does all of this; well almost. It’s light (around 1.3kg), power efficient (8-10 hours of battery plus), but is not the quietest laptop on the planet.

When the laptop has a relatively moderate task to process, it ramps the CPU up to full speed (3.5 GHz). That's great, except for the fact that high clocks generate a lot of heat. When the threshold temperature is constantly exceeded (in my laptop's case, around 50°C), the fan needs to kick in to manage thermals.

There’s nothing wrong with that; the laptop functions perfectly. What I’d like is to do all these tasks, whilst the laptop remains cool and will only require passive cooling. How can this be achieved? By lowering the maximum CPU Clock, of course!

What I ended up doing is setting the maximum CPU usage to 45% (around 1.6 GHz) instead of 100%. This means that tasks run slightly slower, but the laptop runs way cooler. Even better, most of the time the performance cost is not felt, since most tasks do not actually max out the CPU; a lower CPU clock is sufficient!

For now, I’ve solved it naively – setting up this value as a fixed value is not the most efficient. There are times that my laptop is running well below under the threshold temperature where the fan needs to kick-in. A more intelligent solution is to adjust the temperatures on the fly, so that the laptop maintains a target temperature, much like how NVIDA’s GPU Boost works.

This is very easy to set up – this can be accessed through the Windows Power Options. Here’s a step by step guide.

Power Options
1) Right click the battery icon – select Power Options

Change Plan Settings
2) Select your desired power plan and select Change plan settings

Change Advanced Power Settings
3) Select Change Advanced Power Settings

Max Processor State
4) Scroll down, open Processor power management, open Maximum processor state, and type your maximum value (e.g. 45%)
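
If you'd rather not click through the dialogs, the same setting can be applied from an elevated command prompt using powercfg. A minimal sketch, assuming you want 45% on AC power for the currently active plan:

powercfg /setacvalueindex SCHEME_CURRENT SUB_PROCESSOR PROCTHROTTLEMAX 45
powercfg /setactive SCHEME_CURRENT

Use /setdcvalueindex instead for the on-battery value.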

That’s it! Next time, we’ll see how we can do all this programmatically, through WinAPI.

Until the next one.

You don’t need more than 1080p on a 13″ screen!

Recently, I’ve been on the market to buy a new 13″ Laptop. I ended up buying a HP Spectre x360: i7, 8GB RAM, 1080p touch screen and the usual gizmos. I’ll talk about the huge headache I went through (not counting the hours spent searching reviews) in order to actually determine what I’m going to buy.

I was quite sure of what I wanted – a lightweight 13″ laptop with an i7, 8GB of RAM and stuff like that. In other words, a really portable machine which won't slow me down on the go. There were several contenders in this department – the Dell XPS 13, Lenovo Yoga 910, Razer Blade Stealth, the aforementioned HP Spectre x360 and some others which were quickly eliminated from the list. The biggest question was always: 1080p or 4K screen?

People had mixed feelings about this – some said go for 1080p and some said 4K. Here are my thoughts on the subject. Oh, by the way – this argument only applies to Windows-based laptops, not to non-Windows machines.

Let’s start by the biggest problem that screen size carries. If the pixel count grows and the screen does not, this means that the actual pixel size gets smaller. So, this means that a 300 pixels on a 13″ 1080p might be 4cm long, but 300 pixels on a 13″ 4k might be just 1 cm long. Most (older) applications were designed to work with pixels, so they do not cater for big resolutions on small screens.

Fortunately, Microsoft has realised this problem and provides a feature to scale the size of the display accordingly. Old applications will scale up to the appropriate size, but this comes at a cost: most of the time, the bigger the scale, the blurrier the window will look. I've illustrated this below; one can "clearly" see that the D is quite blurred out.

Scaling blurring illustrated

Microsoft themselves acknowledge this problem and provide some workarounds for it. Fortunately, as time goes on, more and more applications are being designed with this problem in mind and scale quite nicely. Also, the new UWP applications (such as the new-looking applications in Windows 10 – Settings, Calculator and such) handle this problem natively; they will not suffer from these problems.

In my case, my 1080p 13″ display came configured out of the box to use 150% scaling. This means that applications that do not handle such scaling are multiplied by 1.5 times in order to scale appropriately. So the problem with scaling and blurring already exists with a 1080p display, let alone a 4K display! Apps which scale poorly will simply exhibit worse symptoms, since the scaling needs to be bigger at a 4K resolution.

This problem also exists in games; Linus played Half-Life on a 16K monitor and the scaling was just laughable.

My end verdict? If you're buying a Windows-based machine, don't opt for 4K on a 13″ display; it will just make the scaling problem worse. Let's hope for a better future where all applications scale correctly! I hope I'll save some time and headache for anyone in the market for a 13″ laptop.

I have not mentioned too many technical details on what is actually going on; I do not want to confuse potential non-technical readers. This post will be followed up by a technical one explaining what is actually happening and, as a programmer, how to program against this problem. If you are interested, though, the problem mostly lies in the domain of DPI and DIP.