Overclocking your Zen 3 / Ryzen 5000 with Precision Boost Overdrive 2 and Curve Optimizer

Ever since I wrote about my experience using Precision Boost Overdrive 2 and Curve Optimizer in my last blog post, I have been asked several questions about how to overclock Ryzen 5000 CPUs. Let’s discuss the basics of overclocking on Ryzen 5000.

Please treat this as a beginner’s guide – you’ll need to spend a lot of time tweaking, especially in the Curve Optimizer. This is not an ultimate overclocking guide, and some people might (and already did) disagree with the values and flow of this guide. Having said that, even if other approaches are better, they will only be slightly better – maybe 1-3%, within margin of error. Following this guide WILL net you a performance gain; maybe not the BEST performance gain, but a measurable one.

The following guide should work for the following CPUs:

  • Ryzen 9 5950X
  • Ryzen 9 5900X
  • Ryzen 7 5800X
  • Ryzen 5 5600X

The same approach should similarly work for the Ryzen 3000 series, but you will not have access to the Curve Optimizer. Blame AMD for this.

Ryzen 5000 – Traditional overclocking is dead

Traditional overclocking involved going into the BIOS, typing in a nice voltage and a reasonable clock speed, and you were done. You can still do it, and you will get a nice score in Cinebench, but you’ll lose everyday performance. Why?

By setting a fixed all-core clock, you will lose the higher boost clocks achieved during lightly threaded workloads (unless you manage to overclock to a fixed 5 GHz – if you do that, please write a guide for us!). These are the kinds of workloads you go through every day, which are the most important. If you set a fixed overclock of, say, 4.6 GHz, you won’t be able to go over 4.6 GHz in common tasks, which will slow them down.

Ryzen’s boost algorithm is smart

On the other hand, Ryzen’s boost algorithm is designed to go past the usual clocks and boost as much as possible, provided there is enough power coming in and the temperatures are in check. Trust the AMD engineers on this one. In my case, my 5900X easily goes past 5 GHz.

The golden trio – PBO2, Power Settings and Curve Optimizer

In order to achieve an actual “overclock” on Ryzen 5000, we’ll need to dive into three major components – PBO2, Power Settings and Curve Optimizer.

Precision Boost Overdrive 2

Precision Boost Overdrive (PBO for short) extends the out-of-the-box parameters that dictate performance on a Ryzen CPU – temperature, SoC (chip) power and VRM current (power delivery). PBO extends the maximum threshold for these components, allowing faster clock speeds to be sustained for longer. In short, these are AMD’s overclocking capabilities baked right into your CPU.

PBO Triangle – via https://hwcooling.net

Power Settings

Here, I’m referring to the three major power settings – PPT, TDC and EDC. PPT is the total power that the CPU is allowed to draw. TDC is the maximum current the CPU is fed under sustained load (thermally and electrically limited). EDC is the maximum current the CPU is fed under short bursts (electrically limited). Allowing the CPU to take more power overall allows it to boost to higher clock speeds. In the PBO triangle analogy, this positively impacts the left and right vertices – SoC power and VRM current – while negatively impacting the top vertex – heat.

Curve Optimizer

Curve Optimizer allows you to undervolt your CPU. Undervolting means that you’re pushing slightly less voltage, which consumes less power and generates less heat. Combined with Precision Boost Overdrive 2, this means you’re producing less heat, allowing the CPU to boost to higher clock speeds for longer. In the PBO triangle analogy, this mostly impacts the top vertex – heat.

Striking a balance with your settings and overclocking your Ryzen 5000

Now that we’ve established our three main players, let’s tackle them one by one. To access these settings, you’ll need to enter your BIOS – they are typically located in Advanced -> AMD Overclocking -> Precision Boost Overdrive. Here’s a sample from my ASRock X570 Steel Legend.

After discussions with my readers, people seem to suggest different priorities when it comes to overclocking. I believe that a modest yet stable overclock can be achieved by prioritizing in this order:

  1. Scalar / Max CPU Override
  2. Power Settings
  3. Curve Optimizer

Some readers believe that the best priority is:

  1. Curve Optimizer
  2. Power Settings
  3. Scalar / Max CPU Override

If you are as confused as I was, pick whichever looks easiest and consider following this guide. Both orders will provide a nice performance gain, and the difference between one method and the other is maybe 1-2% more gain, which is negligible in real life.

Precision Boost Overdrive 2

This should be the easiest – let us just follow AMD’s recommendations. Looking at their slides (AMD Precision Boost Overdrive 2: Official Tech Briefing! – YouTube), we can start with the settings that matter for turning on PBO.

  • Precision Boost Overdrive – Advanced
    • Lets us turn on PBO and make manual adjustments to its settings.
  • PBO Scalar – 10X
    • Should allow you to sustain boost clocks for longer.
    • Some readers argue that this value should actually be 1X, claiming that setting it to 10X will raise your overall voltage. During my brief testing, I did not observe this, but that conclusion can (and might) change with more testing.
  • Max CPU Boost Clock Override – 200MHz
    • Raises your max frequency by 200MHz. On a 5900X, this translates to a theoretical limit of 5150MHz, which is realistic.
    • I am told by readers that setting a +200MHz boost on the Max CPU Boost Clock Override might negatively impact how far you can push the Curve Optimizer. Unfortunately, I have neither the time nor the data to verify this.
    • PURE SPECULATION / MY THOUGHTS AHEAD (no data to back up this claim whatsoever) – by reducing the Max CPU Boost Clock Override, you’ll of course be losing the highest single core boost clocks, **potentially** reducing single core performance, but you may gain multi core performance, or reach the “lower” max single core clock more regularly. These would require extensive separate testing (and probably translate into margin of error when it comes to results).

Power Settings

In their slides (link above), AMD suggest setting Power Limits = Motherboard. I strongly discourage this, as it may limit your power intake (this was noticed both by me and by readers of my blog – see My Experience with Precision Boost Overdrive 2 on a 5900X – Albert Herd, comment by Julien Galland).

For my 5900X, these are the settings that I’ve applied. If you have a 5950X or a 5900X, these values may (or may not) be suitable for you. If you have a 5800X or lower, these values are too high and will hinder performance – apply lower settings to accommodate your CPU, a decent bump over AMD’s default values quoted below. Unfortunately, I don’t own anything apart from a 5900X, so I cannot vouch for these settings on other models.

  • If you have very good cooling (such as a custom loop or strong cooling in general):
    • PPT – 185W
    • TDC – 125A
    • EDC – 170A
  • If your cooler gets too hot with those settings, try a more conservative set (in my case, these hover around 70-75C):
    • PPT – 165W
    • TDC – 120A
    • EDC – 150A

You might notice that your CPU runs too “cool” or too hot – in that case, adjust your figures accordingly. In a multi core benchmark, these figures should all hit 100%. In most workloads, it’s the EDC that plays a role rather than the TDC (since most workloads are short bursts). I also noticed that going too low on EDC will cause instability.

Leave SOC TDC and SOC EDC at 0; these should not impact us (I believe they mostly apply to APUs).

For completeness’ sake, please keep in mind AMD’s default values when making adjustments:

  • Package Power Tracking (PPT): 142W for the 5950X, 5900X and 5800X; 88W for the 5600X.
  • Thermal Design Current (TDC): 95A for the 5950X, 5900X and 5800X; 60A for the 5600X.
  • Electrical Design Current (EDC): 140A for the 5950X, 5900X and 5800X; 90A for the 5600X.

Curve Optimizer

This is probably the most annoying one. The numbers you’re inputting here will vary significantly from one chip to another, so your mileage may vary. These are my values:

  • Negative 11 for the first preferred core on CCX 0 (as indicated by Ryzen Master)
  • Negative 15 for the second preferred core on CCX 0 (as indicated by Ryzen Master)
  • Negative 17 for the other cores.

If you want to start safe, you can apply a Negative 10 offset on all cores.

Testing this setting is extremely painful. You’ll notice that crashes will not happen under load; crashes happen under idle conditions, where your CPU undervolts too much. Hopefully, AMD will look at this algorithm in future BIOS updates and provide more stability. In my experience, Geekbench 5 – Cross-Platform Benchmark is a great tool to stress the CPU out; it tends to crash it when the settings are not right.

Please keep in mind the note that I’ve written about the Max CPU Boost Override (under the header – Precision Boost Overdrive 2). Some users note that they prefer to keep Max CPU Boost Override lower and push for a more aggressive curve.

In my next post, we will look at how to get the best performance from your RAM by applying specific DRAM configurations according to the RAM sticks you own. If you feel adventurous, you can of course try it on your own.

Thanks for reading!

My Experience with Precision Boost Overdrive 2 on a 5900X

Looking for the TL;DR? These are my everyday settings:

  • PPT – 185W, TDC – 125A, EDC – 170A. To run these power settings, you’ll need a beefy cooler. If the CPU gets too hot with these power settings, try PPT – 165W, TDC – 115A, EDC – 150A.
  • Negative 11 for the first preferred core on CCX 0 (as indicated by Ryzen Master)
  • Negative 15 for the second preferred core on CCX 0 (as indicated by Ryzen Master)
  • Negative 17 for the other cores.
  • These moved my multithreaded Cinebench R20 score from 8250 to around 8800-9000 (6-9% gain) and my single threaded Cinebench R20 score from 630 to 650 (3% gain).

__________________________________________________________________________________________________________________

Recently, AMD announced a new algorithm for Precision Boost Overdrive (PBO), aptly named Precision Boost Overdrive 2 (PBO2). You can read more here: AMD Ryzen™ Technology: Precision Boost 2 Performance Enhancement | AMD and here: AMD Introduces Precision Boost Overdrive 2, Boosts Single Thread Performance | Tom’s Hardware. This post is not intended to explain the technicalities of this feature, but rather to show how to take advantage of it.

To get started, you will need to navigate to the BIOS. Unfortunately, you cannot currently use Ryzen Master to do this, but AMD claims that it will be part of Ryzen Master in a future release. In the PBO section, you will need to adjust some settings.

Navigating to AMD Overclocking in the BIOS

My specs are as follows:

  • AMD Ryzen 9 5900X
  • ASRock X570 Steel Legend
  • 32GB C17 Memory
  • 750W PSU
  • 240mm AIO from be quiet!

At first, I naively set the power limits (PPT, TDC and EDC) to 0, which means unlimited. This in turn has a negative effect: it lets the CPU draw as much power as it can, which translates into unnecessary power consumption and heat, which in turn limits the maximum clock speeds achieved. I’d suggest sticking to values which keep the CPU under (or close to) 80C under full load.

In my case, the maximum power settings I manage to sustain are: PPT – 185W, TDC – 125A, EDC – 170A. The recommended values for your CPU will vary according to silicon quality and the cooling provided. Cooling 185W is not an easy feat – you’ll need a good cooler, such as an NH-D15 (noctua.at) or a good AIO (I am using the Pure Loop 240mm silent essential water cooler from be quiet!).

Setting PPT, TDC and EDC to well-balanced values is extremely important; this will help you strike the balance between the power consumption needed by the CPU and realistic temperatures. If the CPU gets too hot with these power settings, try PPT – 165W, TDC – 115A, EDC – 150A.

I have set the PBO Scalar to manual and 10X. I’ll be honest: I am not sure what impact this has, but it looks like a setting which needs tweaking. I’ve tried 1X and honestly did not feel any difference. From what I understand, this governs how long the CPU keeps pumping high voltages and clocks before dialing them down. In burst scenarios, this should not have any impact.

Max CPU Boost Clock Override should be set to 200MHz. This allows for higher clock speeds on single threaded workloads. My 5900X can hit 5.15 GHz with this setting on a single core – and 5.15 GHz is not a one-off number; I regularly see it during light workloads.

Navigating to the Curve Optimizer in BIOS

Now, for the most important part: the Curve Optimizer. As a starting point, I set the best and second-best core of each CCD to negative 10, and the other cores to negative 15.

The next step is quite difficult to instruct, as it purely depends on your silicon quality. In my case, I found the following settings to work for me:

  • Negative 11 for the first preferred core on CCX 0 (as indicated by Ryzen Master)
  • Negative 15 for the second preferred core on CCX 0 (as indicated by Ryzen Master)
  • Negative 17 for the other cores

It took quite a lot of testing to arrive at these figures. You can find the first and second preferred cores in Ryzen Master.

Per-core adjustments in the Curve Optimizer

Firstly, I started with negative 20 on all cores. This resulted in awesome Cinebench R20 scores but poor stability. I then went to negative 15 on all cores. This was not bad, but I was experiencing a crash every now and then, especially when the PC was running cold and able to push higher clocks. It would run all day, but on a cold boot, pushing it would instantly result in a crash. This told me that the algorithm was trying to push for more clocks, but the undervolting was too aggressive.

I then went to negative 10 on all cores, which was fully stable. Finally, I pushed negative 15 on those cores which are not first or second preferred. This remained stable, and eventually I started changing the values slightly every day. Sometimes I go too far and get a WHEA BSOD (especially when the PC is cool and under light workloads).

These moved my multithreaded Cinebench R20 score from 8250 to around 8800-9000 (a 6-9% gain) and my single threaded Cinebench R20 score from 630 to 650 (a 3% gain). These are small gains, but when they come at no cost, it’s good to take advantage of them. And yes, these do not really translate into any tangible performance uplift in everyday computing.

Preferred Cores (Star is 1st, dot is second)

The performance uplift is thanks to higher sustained clocks. With PBO turned off, I was sustaining around 4.1 GHz core clock and with PBO on, I am sustaining between 4.4-4.5 GHz in Cinebench R20.

Cinebench scores with PBO2
Full load under Cinebench R20

Simpler (non-AVX) workloads will clock past 4.5 GHz. I suspect that Ryzen dials the clocks back a bit during AVX workloads, but I cannot confirm this.

Full load under a synthetic load – Memtest 64

Please let me know your experience with PBO2 and whether you found this post useful. If you’ve got better settings than mine, I’d appreciate the feedback! Of course, keep in mind that, as AMD says, no two processors are the same; some need more voltage than others to remain stable. It also depends on the power delivery quality, the sustained temperatures, the quality of the thermal paste, the overall case temperature and a plethora of other things, as mentioned in the first link to AMD’s site.

Automating bank cheque analysis by using Microsoft Cognitive Services

Just looking for code? https://github.com/albertherd/ChequeAnalyser

Microsoft Cognitive Services is a rich set of AI services, covering areas such as Computer Vision, Speech Recognition, Decision Making and NLP. The great thing about these tools is that you don’t have to be an AI expert to make use of them, as the models come pre-trained and production ready. You just feed in your information and let the framework work for you.

We’ll be looking at one area of Microsoft Cognitive Services – Computer Vision. More specifically, we’ll be looking at the handwriting API – you provide the handwriting and the system returns the actual text. We’ve already worked with the Computer Vision API from Microsoft Cognitive Services – we used it to tag our photo album.

Let’s look at today’s scenario – we’re a fictitious bank which processes bank cheques. These cheques arrive hand-written from our clients and contain instructions on how to transfer money from one account to the holder’s account.

A cheque typically has the following information:

  • Issue Date
  • Payee
  • Amount (in digits)
  • Amount (in words, for cross reference)
  • Payer’s account number
  • Other information, which was omitted for this proof of concept

This is how our fictitious cheque looks.

Fictitious Cheque
Fictitious cheque – credit: https://www.dreamstime.com/royalty-free-stock-image-blank-check-false-numbers-image7426056

This is how our fictitious cheque looks when we’re looking at the regions we’re interested in, represented in bounding boxes.

0_analysis_template
Cheque Template with bounding boxes representing areas of interest

Let’s consider these three handwritten cheques.

The attached application does the following analysis:

  • Import these cheques as images.
  • Send the images over to Microsoft Cognitive Services
  • Extract all the handwriting / text found in the image
  • Consider only the text we’re interested in, as represented by the bounding boxes previously (see the sketch after this list)
  • Forward this extracted information to whatever system needed. In our case, we’re just printing them to screen.
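The bounding-box filtering step is plain geometry: a recognized word belongs to a field if its box lies inside that field’s region on the cheque template. Here’s a minimal sketch of the idea in C – the actual solution is written in C#, and the struct and values below are made up for illustration:

#include <stdio.h>

typedef struct { int x, y, width, height; } BoundingBox;

/* A recognized word belongs to a region of interest if its box
   lies entirely within the region's box. */
int IsWithinRegion(BoundingBox word, BoundingBox region)
{
    return word.x >= region.x &&
           word.y >= region.y &&
           word.x + word.width <= region.x + region.width &&
           word.y + word.height <= region.y + region.height;
}

int main(void)
{
    BoundingBox payeeRegion = { 100, 80, 400, 40 }; /* hypothetical template region */
    BoundingBox word = { 120, 90, 80, 20 };         /* hypothetical OCR result box */

    if (IsWithinRegion(word, payeeRegion))
        printf("Word belongs to the payee field\n");
    return 0;
}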

Below is the resulting information derived from the sample cheques.

ChequeAnalyserResult
Results of Cheque Analysis

Most of the heavy lifting is done by Microsoft Cognitive Services, which makes these AI tools available to the masses. Of course, with a bit more business logic, the information extracted by these tools can be greatly improved, making them production ready.

As with the previous example, this example uses the TPL Dataflow library, which is an excellent tool for Actor-Based multithreaded applications.

If you want to try this yourself, you’ll need an Azure account, a Cognitive Services resource (and its API key) and the code from GitHub.

Until the next one!

How to avoid getting your credit card details stolen online (some practical ways)

Let’s jump straight into the points, then some back-story.

  • Only submit credit card details to make a purchase on well-known shops, such as Amazon, eBay or official product merchants.
  • Use third-party payment services where possible, such as PayPal. With this method, the merchant never has access to your credit card details, only to the authorized funds.
  • Ideally, use a debit card rather than a credit card online. If the card details get stolen, the thief can only use the available funds, without you ending up in debt.
  • Also, only transfer the money you’re going to use when making the purchase. In general, the card should stay empty.
  • Use applications such as Revolut which have capabilities to enable / disable online transactions on demand.
  • Revolut also allows for disposable credit cards – the card number changes every time you make a transaction. This means that even if your credit card details are stolen, the card is already dead and worthless. You’ll need to pay a monthly fee, though.
  • Avoid saving credit card details in your browser. Although the CVV is usually not saved, writing it down again won’t take long. This avoids the possibility of a virus sniffing out your credit card details if they are saved.

Why am I writing these? I’ve just stumbled on a Cyber Security episode done by MITA / the Maltese Police (a great initiative, by the way). Although the content made sense, I felt that some practical points were missing (original video here). The premise of the video is to only buy from sites using HTTPS, as it’s secure and you’ll know the seller. Some points on that premise:

  • Running HTTPS ONLY GUARANTEES that the transmission between you and the merchant is secure. It does NOT mean you truly know who’s responsible as a merchant. There are ways to “try” to fix this (through EV certificates), but EVs are probably dead as well.
  • What happens after your credit card details are securely transmitted is unknown. The following may occur:
    1. The merchant might be an outright scammer running a merchant site with an SSL certificate (which nowadays can be obtained for free, e.g. from https://letsencrypt.org/).
    2. The merchant might store your credit card details insecurely; they may end up getting hacked and the credit card details will get stolen.


Store your MySQL Docker database info in a Docker Volume

If you spin up a MySQL Docker container, you’ll notice that once the container is removed, all the information is lost! In order not to lose any information from your MySQL Docker database, a volume will need to be attached to the container. Let’s do that!

Let’s create a new volume – this will be used to store all your database information:

docker volume create mysql

Once the volume is created, it can be attached to a newly spun-up MySQL container:

docker run --name mysql -e MYSQL_ROOT_PASSWORD=albert --mount source=mysql,target=/var/lib/mysql -d mysql
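If you’d like to double-check where Docker keeps this volume on the host, inspect it – the Mountpoint field in the output shows the directory backing /var/lib/mysql inside the container:

docker volume inspect mysql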

Any databases created on this MySQL instance are now preserved! Let’s test it out. Let’s connect to the container and create a new database:

docker exec -it mysql mysql -uroot -p

Supply your password (in this example, the password is the value that we’ve supplied for MYSQL_ROOT_PASSWORD – albert).

Once you’re connected, let’s see the current databases. The default installation will have four databases:

SHOW DATABASES;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| sys                |
+--------------------+

Let’s go ahead and create a new database!

CREATE DATABASE test;
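If you want to be extra sure that the data inside the database survives too, create a table and a row before tearing the container down – something like:

USE test;
CREATE TABLE notes (id INT PRIMARY KEY, body VARCHAR(255));
INSERT INTO notes VALUES (1, 'still here after the container is recreated');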

If you list all the databases now, you should get the following:

SHOW DATABASES;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| sys                |
| test               |
+--------------------+

Detach from MySQL by typing exit. Let’s now remove the current container and create a new one, re-attaching the previously created volume:

docker container rm mysql -f
docker run --name mysql -e MYSQL_ROOT_PASSWORD=albert --mount source=mysql,target=/var/lib/mysql -d mysql

Let’s re-attach and get the list of databases (don’t forget to supply the password):

docker exec -it mysql mysql -uroot -p
SHOW DATABASES;

The output should now read as follows – the database test still exists, even after deleting and re-creating the container:

+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| sys                |
| test               |
+--------------------+

How to get your first Docker container up and running on Ubuntu in 2 minutes or less

Looking to install Docker and play around a bit? If you follow the main guide on the official Docker site, you’ll be surprised how many steps it takes to get Docker installed.

Thankfully, if you keep scrolling, you’ll notice that there is a convenience script! That’s great! Let’s get Docker installed!

Make sure you have cURL installed:

sudo apt-get install curl

Then let’s use the convenience script to install Docker!

sudo curl -sSL https://get.docker.com/ | sh

Once the installation is complete, let’s get your first container running!

sudo docker run hello-world

This should show something along the lines of:

Hello from Docker!
This message shows that your installation appears to be working correctly.

If you see this message, your installation is complete!
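From here, you can start experimenting with real images – for example, spinning up an interactive Ubuntu shell inside a container:

sudo docker run -it ubuntu bash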

Tag your photo album using Microsoft Cognitive Services

Just interested in the source code? – https://github.com/albertherd/PhotoTagging

Computer vision is one of the key areas that has seen huge growth in both capability and popularity. Yet it seems that it’s still out of reach for many; I honestly felt lost when I was trying to play around in this field. It feels like we’re trying to solve problems which have already been solved by other companies. It seems Microsoft shares this view, as they’ve introduced machine learning features in the form of SaaS.

I stumbled upon Microsoft Cognitive Services through a presentation and was genuinely amazed. What’s amazing isn’t the results that this service yields – I expected nothing less than excellent results from such tools. What amazed me is how EASY it is to get involved – there is no fiddling with pages and pages of guides just to download, install and play around with some software.

Microsoft Cognitive Services enables you to build a huge array of machine-learning powered applications, ranging from vision and decision making to natural language processing and other areas. Let’s play around with vision – can we use Microsoft’s Cognitive Vision Services to help us organise our photo library?

The idea is that I have many photos, with subjects ranging from food, vacations, family and friends to whatnot. What if my photos contained the proper EXIF tags, such as subject and tags? This would allow me to classify my photos by subject and search through them. What if I could find my photos instantly instead of sifting manually through thousands of them? And I presume it’s not just me – everyone has a smartphone nowadays, so this is everyone’s pain.

Great – now we have an objective! Let’s make the tools work for us. The process will be simple – upload a photo to Microsoft’s Cognitive Vision Services, get the tags and a nice description and slap them onto the actual file. Oh, and when I say EXIF tags, these can be viewed in File Explorer like below. (Windows 10 Dark Theme in File Explorer here)

ExifInWindowsExplorer

Ready to tag your photo library? Let’s go!

Get a Microsoft Cognitive Services Account

Since this is an online service, you’ll need to have an active account with Microsoft Azure. Get your free account from here. Don’t worry, the free tier is more than enough to get you playing around – I’ve used it to develop, test and write this blog post and I still have plenty of free capacity left.

Create a new Cognitive Services Resource and get the API key

Now that you have an active Azure account, navigate to the Azure Portal and create a new Cognitive Services resource. Follow the wizard and get the service created – choose whatever region works best for you; I chose West Europe and the free tier in my case. Once it’s created, we’ll need two things – the URL to our endpoint and our API key. Get both from the quick start page.

Get your photos tagged!

Okay, we’ve got all the resources needed – it’s time to get some work done! I’ve created an application that takes a photo, uploads it to our new Cognitive Services resource, gets tags and a description and applies them to the photo.
Follow these steps to get your photo tagging game going!

  1. Download / Clone my application from GitHub
  2. Open the application and navigate to PhotoAnalyser.cs. Change the subscriptionKey and uriBase to the ones you got previously. The keys in the solution are placeholder keys only.
  3. Run the application – have your photo directory ready as this is asked for at runtime.
  4. Let it do its magic!

In the below example, photo analysis tells us that it’s a pizza on a plate and it also gave us some appropriate tags. Try downloading and viewing the pizza photo – tags and title are preserved as EXIF data.

Keep in mind that the code in the provided solution is not production ready – it’s merely meant as a playground.
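If you’re curious what the application does under the hood, the core of it is a single HTTP POST of the image bytes to the Computer Vision analyze endpoint. Here’s a rough sketch of that call in C using libcurl – treat the region, API version, file name and key as placeholder assumptions of mine; the real solution is written in C# and lives in the GitHub repository:

/* Sketch only - compile with: gcc phototag.c -lcurl */
#include <curl/curl.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* Assumed endpoint shape - substitute your own region, API version and key */
    const char *url = "https://westeurope.api.cognitive.microsoft.com/"
                      "vision/v2.0/analyze?visualFeatures=Description,Tags";

    /* Read the photo into memory */
    FILE *f = fopen("pizza.jpg", "rb");
    if (!f) { perror("pizza.jpg"); return 1; }
    fseek(f, 0, SEEK_END);
    long size = ftell(f);
    fseek(f, 0, SEEK_SET);
    char *image = malloc(size);
    fread(image, 1, size, f);
    fclose(f);

    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL *curl = curl_easy_init();
    if (!curl) return 1;

    struct curl_slist *headers = NULL;
    headers = curl_slist_append(headers, "Content-Type: application/octet-stream");
    headers = curl_slist_append(headers, "Ocp-Apim-Subscription-Key: <your-api-key>");

    curl_easy_setopt(curl, CURLOPT_URL, url);
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS, image);
    curl_easy_setopt(curl, CURLOPT_POSTFIELDSIZE, size);

    /* libcurl prints the JSON response (tags + description) to stdout by default */
    if (curl_easy_perform(curl) != CURLE_OK)
        fprintf(stderr, "Request failed\n");

    curl_slist_free_all(headers);
    curl_easy_cleanup(curl);
    curl_global_cleanup();
    free(image);
    return 0;
}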

Explore the solution

What’s the fun of having a piece of software working without knowing how it works under the hood? Here are some points about the application, in no particular order:

  1. It makes use of the excellent TPL Dataflow library from Microsoft – this enables the application to scale with ease and to work around the pesky throttling that the free tier carries with it.
  2. It resizes the images, since they don’t need to be large – plus this speeds the process up.
  3. It uses ImageSharp to resize and add EXIF tags to the images.
  4. Given that this application is manipulating images, it is memory intensive – I’ve seen it hit close to 4GB in memory usage.
  5. It’s split into a library and a consumer, just in case.

Continue exploring the Microsoft Cognitive Services stack

Computer vision is just one of the areas in the Microsoft Cognitive Services stack; there are other excellent services to enrich your applications. They also have excellent documentation; I followed it to build my application.

That’s it for today! This was an extremely fun project to learn and experiment with new technologies! Until the next one.

Outputting DS18B20 temperatures on a LCD1602 – Raspberry Pi Temperature Monitoring Part 4

This is part of a tutorial series. If you feel a bit lost, I suggest following the tutorial in order:

Just interested in the code? – https://github.com/albertherd/DS18B20Reader

This article assumes that you’ve connected one or more DS18B20 sensors to your Raspberry Pi and configured your Raspberry Pi to work with an LCD1602. If you haven’t, read the above-mentioned links.

Okay – this should be a quick post – most of the heavy lifting is done. Remember the LCD1602 library that we used in the second tutorial? We’ll be using that to simply take the temperature we captured in the third tutorial and display it.

We’ll need to do the following changes:

  1. Import the LCD1602 library.
  2. Initialize the LCD1602 on application startup.
  3. Read and display information on the LCD1602.
  4. Cleanup resources before exiting – this is important since we’ll need to turn off the backlight after usage.

Import the LCD1602 library

Get a copy of this repository – either by cloning it or by simply copying the lcd1602.c and lcd1602.h files to your solution. We’ll also add them to our CMake file – it should look something like this. I’ve left the solution on debug in this case.

set(SOURCE main.c sensor.c lcd1602.c main.h sensor.h lcd1602.h)
set(CMAKE_BUILD_TYPE Debug)
add_executable(DS18B20Reader ${SOURCE})

Now it’s just a matter of adding a reference to lcd1602.h in the solution. Next!

Initialize the LCD1602 on application startup

We’ll need to initialize the library and open a connection to the display under the right address – check your device address as shown in the second tutorial. To simplify things, I hard-coded the value, which is 0x27 in my case. Call this when initializing the application.

void InitializeLCD()
{
    int rc = lcd1602Init(1, LCDADDRESS); /* LCDADDRESS is hard-coded to 0x27 in my case */
    if (rc)
    {
        printf("Initialization failed; aborting...\n");
        return;
    }
}

Read and display information on the LCD1602

We’ll be modifying our main loop to output content both to the console and to the LCD1602 screen. The main adjustment we made in the main loop is breaking down our output into two functions – outputting to the console (not important) and outputting to the LCD1602. Let’s see the main loop:

void ReadTemperatureLoop(SensorList *sensorList)
{
    while(!sigintFlag)
    {
        for(int i = 0; i < sensorList->SensorCount; i++)
        {
            float temperature = ReadTemperature(sensorList->Sensors[i]);
            PrintTemperatueToLCD1602(sensorList->Sensors[i], i % LCD1602LINES, temperature);
            LogTemperature(sensorList->Sensors[i], temperature);
        }
    }
}
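Notice the sigintFlag guarding the loop above – it’s what lets the application exit cleanly and run the cleanup below when you press Ctrl+C. The actual wiring lives in the repository; a minimal version could look like this (all names except sigintFlag are my own):

#include <signal.h>

volatile sig_atomic_t sigintFlag = 0;

void OnSigint(int signum)
{
    (void)signum;
    sigintFlag = 1; /* tells ReadTemperatureLoop to stop */
}

/* Call once during application startup, before entering the loop */
void RegisterSigintHandler()
{
    signal(SIGINT, OnSigint);
}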

Let’s now have a look at the important method – PrintTemperatueToLCD1602. Keeping in mind that the LCD1602 has two lines, we receive the calculated line number as a parameter – the modulus in the loop above makes sure the value is always either 0 or 1.

We’ll also need to remember that each line holds up to 16 characters, so we’re truncating anything longer than 16 characters (actually 16 + 1 for null termination). We then just pass the (potentially truncated) string to the LCD1602 and et voilà!

void PrintTemperatueToLCD1602(Sensor *sensor, int lineToPrintDataOn, float temperature)
{
    char temperatureString[LCD1602CHARACTERS + 1];
    snprintf(temperatureString, LCD1602CHARACTERS + 1, "%s : %.2fC", sensor->SensorName, temperature);

    lcd1602SetCursor(0, lineToPrintDataOn);
    lcd1602WriteString(temperatureString);
}

Cleanup resources before exiting

After we’re done, it’s just a matter of cleaning up resources. As previously mentioned, this is important since we’ll need to turn off the backlight after usage. In the cleanup method, we’re just calling the lcd1602Shutdown method.

void Cleanup(SensorList *sensorList)
{
    printf("Exiting...\n");
    FreeSensors(sensorList);
    lcd1602Shutdown();
}

Fetch a complete copy of the code from GitHub – https://github.com/albertherd/DS18B20Reader

Let’s run the application! In a terminal with git and cmake installed, run the following commands:

git clone https://github.com/albertherd/DS18B20Reader
cd ./DS18B20Reader
cmake . && make && ./DS18B20Reader "Sensor"

With some luck, your LCD1602 should display something like the below. In my case I have two sensors so I’ve fired up the application using the following syntax:

./DS18B20Reader "Sensor1" "Sensor2"

LCD1602OutputCropped

Until the next one!

Using C to monitor temperatures through your DS18B20 thermal sensor – Raspberry Pi Temperature Monitoring Part 3

This is part of a tutorial series. If you feel a bit lost, I suggest following the tutorial in order:

Just interested in the code? – https://github.com/albertherd/DS18B20Reader

Since you made it here, great! Your Raspberry Pi should have one or more DS18B20 thermal sensors connected, like the image below.

sensor2.png

Now that we have our DS18B20 thermal sensor connected to our Raspberry Pi, it’s time to do some programming to read out the temperature! Our application will need to do the following tasks:

  1. Discover all the DS18B20 sensors (in my case, I’ve connected 2 but this application should handle an arbitrary number of sensors).
  2. Assign a friendly name so we’ll know which sensor is which and store them in a list.
  3. Retrieve and parse the information from the device.
  4. Do whatever necessary with the gathered information.

1) Discover all the DS18B20 sensors

First and foremost, our code cannot just assume that devices exist on the system – we’ll need to go and discover them. Since the DS18B20 uses the 1-Wire protocol, devices will live under the /sys/bus/w1/devices/ directory. Therefore, our code will need to look for devices living under this directory whose names start with 28, since all DS18B20 device IDs start with 28. Let’s start by counting how many devices are connected.

typedef struct Sensor
{
    char *SensorName;
    FILE *SensorFile;    
} Sensor;
 
typedef struct SensorList
{
    Sensor **Sensors;
    int SensorCount;
} SensorList;

DIR *dir;
struct dirent *dirEntry;

SensorList *sensorList = malloc(sizeof(SensorList));
sensorList->SensorCount = 0;

if(!(dir = opendir("/sys/bus/w1/devices/")))
    return sensorList;

while((dirEntry = readdir(dir)))
{        
    if(strncmp(dirEntry->d_name, "28", 2) == 0)
    {
        sensorList->SensorCount++;
    }
}

2) Assign a friendly name so we’ll know which sensor is which and store them in a list

Now that we’ve discovered the devices connected to the system, it’s time to save a reference and optionally a friendly name as well. Logic mostly applies from step 1.

rewinddir(dir); //the counting loop above consumed all directory entries - rewind before the second pass

sensorList->Sensors = malloc(sizeof(Sensor*) * sensorList->SensorCount);
Sensor **currentSensor = sensorList->Sensors;

//sensorNames and sensorNamesCount come from the command-line arguments
int sensorNamesAllocated = 0;
while((dirEntry = readdir(dir)))
{
    if(strncmp(dirEntry->d_name, "28", 2) == 0)
    {
        char *sensorName;
        if(sensorNamesCount > sensorNamesAllocated)
        {
            sensorName = strdup(*sensorNames);
            sensorNames++;
            sensorNamesAllocated++;
        }
        else
        {
            sensorName = strdup("Sensor");
        }

        char sensorFilePath[64];
        sprintf(sensorFilePath, "%s%s%s", "/sys/bus/w1/devices/", dirEntry->d_name, "/w1_slave");
        *currentSensor = GetSensor(sensorFilePath, sensorName);
        currentSensor++;
    }
}

closedir(dir);

Sensor *GetSensor(char *sensorId, char *sensorName)
{
    Sensor *sensor = malloc(sizeof(Sensor));
    sensor->SensorFile = fopen(sensorId, "r");
    sensor->SensorName = sensorName;
    return sensor;
}    

3) Read the temperature from the device

This is the most exciting part – we actually get to read the temperatures! Using the sensor information we got from steps 1 and 2, we can get the device, open it as a file, extract the readings and parse it accordingly. Using the FILE API makes it very easy to do so – grab all the contents and store it in a buffer. As mentioned in the first tutorial, the content of the file looks as follows –

0b 01 55 05 7f 7e 81 66 bf : crc=bf YES
0b 01 55 05 7f 7e 81 66 bf t=16687

We’re only interested in the t= component, so some string manipulation and float conversion will take the 16687 and convert it into 16.687C. We’re also doing some range checking, since the DS18B20 is rated between -55C and +125C.

   
float ReadTemperature(Sensor *sensor)
{
    long deviceFileSize;
    char *buffer;

    FILE *deviceFile = sensor->SensorFile;
    fseek(deviceFile, 0, SEEK_END);
    deviceFileSize = ftell(deviceFile);
    fseek(deviceFile, 0, SEEK_SET);

    buffer = calloc(deviceFileSize, sizeof(char));

    fread(buffer, sizeof(char), deviceFileSize, deviceFile);
    char *temperatureComponent = strstr(buffer, "t=");
    if(!temperatureComponent)
    {
        free(buffer);
        return -1;
    }

    temperatureComponent += 2; //move pointer 2 spaces to compensate for t=

    float temperatureFloat = atof(temperatureComponent);
    temperatureFloat = temperatureFloat / 1000;

    if(temperatureFloat < -55) //clamp to the DS18B20's rated -55C to +125C range
        temperatureFloat = -55;
    else if(temperatureFloat > 125)
        temperatureFloat = 125;

    free(buffer);
    return temperatureFloat;
}

4) Do whatever necessary with the gathered information

We now have the information at hand, great! We can do all sorts of things with it, such as sending an email, activating some other device or whatever is necessary. For demo purposes, we’re simply going to output the contents to the console just to see it working. This is in a loop, so we’ll keep reading the temperature until the application exits.

while(1)
{
    for(int i = 0; i < sensorList->SensorCount; i++)
    {
        time_t currentTime = time(NULL);
        char dateTimeStringBuffer[32];
        strftime(dateTimeStringBuffer, 32, "%Y-%m-%d %H:%M:%S", localtime(&currentTime));

        float temperature = ReadTemperature(sensorList->Sensors[i]);
        printf("%s - %s - %.2fC\n", dateTimeStringBuffer, sensorList->Sensors[i]->SensorName, temperature);
    }
}
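As an example of “whatever is necessary”, here’s a small sketch of my own (not part of the repo) that appends each reading to a CSV file instead of just printing it, so you can graph the data later:

#include <stdio.h>
#include <time.h>

/* Append a single reading to a CSV file */
void LogTemperatureToCsv(const char *sensorName, float temperature)
{
    FILE *csv = fopen("temperatures.csv", "a");
    if(!csv)
        return;

    time_t currentTime = time(NULL);
    char dateTimeStringBuffer[32];
    strftime(dateTimeStringBuffer, 32, "%Y-%m-%d %H:%M:%S", localtime(&currentTime));

    fprintf(csv, "%s,%s,%.2f\n", dateTimeStringBuffer, sensorName, temperature);
    fclose(csv);
}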

To try out the code as a whole solution, grab a copy from GitHub – https://github.com/albertherd/DS18B20Reader

In a terminal with git and cmake installed, run the following commands:

git clone https://github.com/albertherd/DS18B20Reader
cd ./DS18B20Reader
cmake . && make && ./DS18B20Reader "Sensor"

If the output looks like the below, congratulations!

ds18b20 tutorial sample output.png

In the next tutorial, we’ll pick up from here and we’ll start outputting the content on an LCD1602! Until the next one.

Connecting a LCD1602 with an I2C module to your Raspberry Pi – Raspberry Pi Temperature Monitoring Part 2

The LCD1602 is a very popular LCD that can be connected to various devices, such as the Raspberry Pi. The LCD1602 on its own is quite tricky to wire up, since it requires 16 pins to be connected. The LCD1602 can also be purchased with an I2C module, which reduces the number of pins needed to just 4.

For this tutorial, we’ll be working with a LCD1602 with an I2C module. I got mine from AliExpress for around $2.50. Make sure to grab a set of jumper cables as you’ll need them to connect the LCD to the Raspberry Pi. I got mine from AliExpress as well for around $1.50.

IMG_20190102_110607.jpg

img_20190109_230900

Let’s start by wiring it up. We have 4 pins to connect – GND (ground), VCC (power, 5V), SDA (data line) and SCL (clock line). GND and VCC can be connected to any equivalent GND and 5V pin. SDA and SCL should be connected to pins BCM 2 and BCM 3 respectively.

lcd1602_i2c_raspberrypi

img_20190109_231321

If you’re following the Raspberry Pi Temperature Monitoring Part 1 and connected the DS18B20 temperature sensors, you should now have the following configuration.

lcd1602_i2c_ds18b20_raspberrypi

img_20190109_232649

Great! We’re done from the hardware’s side – let’s start configuring our Raspberry Pi to communicate with our LCD.

Firstly, let’s enable I2C from the Raspberry Pi config. Fire up raspi-config to get started: sudo raspi-config

Now navigate to Interfacing Options => I2C => Enable I2C

raspi-config-interfacing-options

raspi-config-interfacing-options-i2c

Now that we’ve enabled I2C communication, it’s time to start development! We’ll need to get some tools before we start working though, so fire up a shell and input:

sudo apt-get install i2c-tools

Once that’s done, the LCD is ready to be programmed! Let’s make sure that the LCD is properly connected and working. In a shell, type:

i2cdetect -y 1

The output should be something like the below. Note the number outputted by the command; it will be needed later on when trying out our demo code. In this case, the address is 27 (0x27 in hex).

i2cdetect

Great! Now it’s time to test out our display and see if it works! We’ll be using a GitHub library – https://github.com/albertherd/LCD1602. This has been forked from https://github.com/bitbank2/LCD1602; we’ll be using my fork, since the original repository has an unresolved issue with clearing the display.

After you’ve cloned the repository in your working directory, it’s time to use the address (0x27 in my case) obtained earlier. Open main.c, find the call to lcd1602Init and change the second parameter. This is how it looks in my case:

lcd1602Init(1, 0x27);

Now it’s time to compile and run our code. If all goes well, we should be getting some text on the screen. You can change the text to whatever you’d like by changing the following lines in main.c.

lcd1602WriteString("BitBank LCD1602");
lcd1602SetCursor(0,1);
lcd1602WriteString("ENTER to quit");

Build and run using the following commands:

make
make -f make_demo
./demo

The screen should look like the below:

IMG_20190110_231937.jpg

Great! Now we’ve successfully connected our LCD1602 to our Raspberry Pi and we’re able to output content on it!

In the next part of this tutorial series, we’ll start by capturing the temperature using the sensor in our first part of the tutorial and outputting it! Stay tuned.