Recently, I was working on a task where we had to get the file entries and names out of ZIP files stored on Azure. We had terabytes of data to go through and downloading them was not really an option. At the end of the day, we solved this in a totally different way, but I remained curious whether this was possible, and it sure is.
The aim is to get all the entry names of ZIP files stored on an Azure Storage Account. Unfortunately, using our beloved HttpClient isn’t possible (or at least, I didn’t research enough). The reason is that although HttpClient does allow us to access an HTTP response as a Stream, the Stream itself isn’t seekable (CanSeek: false).
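You can see this for yourself with a quick sketch like the one below (the URL is just a placeholder for any ZIP file reachable over HTTP):

using System;
using System.Net.Http;

using var httpClient = new HttpClient();

// Placeholder URL – any ZIP file served over HTTP will do.
var response = await httpClient.GetAsync(
    "https://myaccount.blob.core.windows.net/archives/big.zip",
    HttpCompletionOption.ResponseHeadersRead);

using var stream = await response.Content.ReadAsStreamAsync();

// Prints False – the response stream can only be read forward, so ZipArchive
// would have to read the whole blob just to reach the central directory.
Console.WriteLine(stream.CanSeek);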
This is why we need to use the Azure.Storage.Blobs API – this gives us a seekable stream against a file stored in an Azure Storage Account. What this means is that we can download just the parts of the ZIP file where the names are stored, rather than the data itself. Here is a detailed diagram of how ZIP files are laid out, though this is not required reading since the libraries will handle all the heavy lifting for us – The structure of a PKZip file (jmu.edu)
We will also be using the out-of-the-box ZipArchive class. This allows us to open a ZIP file from a Stream. It is also smart enough to know that, if a stream is seekable, it can seek to the part of the file where the entry names are stored rather than reading the whole archive.
Therefore, all we need is to open a stream to the ZIP using Azure.Storage.Blobs, pass it to ZipArchive and read the entries out of it. This process ends up being almost instant, even for large ZIP files.
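Here is a minimal sketch of the idea (the connection string, container and blob names are placeholders):

using System;
using System.IO.Compression;
using Azure.Storage.Blobs;

// Placeholder connection details – swap in your own storage account values.
var blobClient = new BlobClient("<connection-string>", "<container-name>", "big-archive.zip");

// OpenReadAsync returns a seekable stream over the blob, so ZipArchive can jump
// straight to the central directory instead of downloading the whole file.
using var blobStream = await blobClient.OpenReadAsync();
using var archive = new ZipArchive(blobStream, ZipArchiveMode.Read);

foreach (var entry in archive.Entries)
{
    Console.WriteLine(entry.FullName);
}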
Currently, I am working on a project that requires zipping and compressing files that exist on a storage account. Unfortunately, unless I am missing something, there is no out-of-the-box way to ZIP files on Azure Storage.
The two major possibilities that I’ve found are:
Azure Data Factory – It’s a cloud-based ETL service. In my research, I found that this tool can cost quite a lot, since you’re paying for the rented machines and tasks. Data Factory – Data Integration Service | Microsoft Azure
Writing a bespoke solution – of course, you’ve got the flexibility of doing whatever you want, but it probably takes more time to develop, test and so on.
Anyway, in my case I decided to write my own application; there were other requirements that I needed to satisfy, which made it too complex to implement in Azure Data Factory. I’ve written the following code (some code omitted for brevity):
CloudBlockBlob blob = targetStorageAccountContainer.GetBlockBlobReference("zipfile.zip");
blob.StreamWriteSizeInBytes = 104_857_600;

using (Stream dataLakeZipFile = await blob.OpenWriteAsync())
using (var zipStream = new ZipOutputStream(dataLakeZipFile))
{
    DataLakeDirectoryClient sourceDirectoryClient = dataLakeClient.GetDirectoryClient(sourceDataLakeAccount);

    await foreach (var blobItem in sourceDirectoryClient.GetPathsAsync(recursive: true, cancellationToken: cancellationToken))
    {
        zipStream.PutNextEntry(new ZipEntry(blobItem.Name));

        var httpResponseMessage = await _httpClient.GetAsync(GetFileToAddToZip(blobItem.Name), HttpCompletionOption.ResponseHeadersRead);
        using (Stream httpStream = await httpResponseMessage.Content.ReadAsStreamAsync())
        {
            await httpStream.CopyToAsync(zipStream);
        }

        zipStream.CloseEntry();
    }

    zipStream.Finish();
}
The code above does the following:
Create a reference to the ZIP file that is going to be created on the Storage Account. I also set StreamWriteSizeInBytes to 100 MB, the largest allowed value; I never experimented with other figures. This controls how much data is written per block.
Open a Stream against the ZIP file. This overwrites any existing file with the same name.
Get all the files you need to ZIP. In my case, I am using the DataLake API because our files are on a Storage Account with hierarchical namespaces activated. This will work just as well if your Storage Account doesn’t use hierarchical namespaces (you can just swap in the CloudBlobContainer API).
Open a new HTTP connection to each source file and fetch it as a stream.
Copy the data received from that stream into the zip stream. This translates into HTTP requests that upload the data back to the Storage Account.
Close down all resources when done.
Importantly, the code downloads files from the storage account and instantly uploads them back to the storage account as a ZIP. This does not store any data on physical disk and uses RAM to buffer the data as it’s downloaded and uploaded.
Of course, this part is just an excerpt of the whole system needed, but it can be adapted accordingly.
Please treat this guide as a beginner’s starting guide – you’ll need to spend a lot of time tweaking, especially on the Curve Optimizer. This is not an ultimate overclocking guide, and some people might not (and already did not) agree with the values and flow of this guide. Having said that, even if other approaches may be better, they will only be slightly better, maybe 1-3%, within margin of error. Following this guide WILL net you a performance gain; maybe not the BEST performance gain, but a measurable one.
The following guide should work for the following CPUs:
Ryzen 9 5950x
Ryzen 9 5900x
Ryzen 7 5800x
Ryzen 5 5600x
The following should similarly work for Ryzen 3000 series, but you will not have access to the Curve Optimizer. Blame AMD for this.
Ryzen 5000 – Traditional overclocking is dead
Traditional overclocking involved going into the BIOS, typing in a nice voltage and a reasonable clock speed, and you were done. You can still do it, and you will get a nice score in Cinebench, but you’ll lose everyday performance. Why?
By locking in a fixed all-core clock, you will lose the higher boost clocks achieved during lightly threaded workloads (unless you manage to overclock to a fixed 5 GHz... if you do that, please write a guide for us!). These are the kinds of workloads that you go through every day, and they are the most important. If you set a fixed overclock of, say, 4.6 GHz, you won’t be able to go over 4.6 GHz in common tasks, which will slow them down.
Ryzen’s boost algorithm is smart
On the other hand, Ryzen’s boost algorithm is designed to go past the usual clocks and boost as much as possible, provided there is enough power coming in and the temperatures are in check. Trust the AMD engineers on this one. In my case, my 5900x easily goes past 5 GHz.
The golden trio – PBO2, Power Settings and Curve Optimizer
In order to achieve an actual “overclock” on Ryzen 5000, we’ll need to dive into three major components – PBO2, Curve Optimizer and Power Settings
Precision Boost Overdrive 2
Precision Boost Overdrive (PBO for short) extends the out-of-the-box parameters that dictate performance on a Ryzen CPU – temperature, SoC (chip) power and VRM current (power delivery). PBO extends the maximum threshold for these components, allowing faster clock speeds to be sustained for longer. In short, this is AMD’s overclocking capability baked into your CPU.
Here, I’m referring to the three major power settings – PPT, TDC and EDC. PPT is the total power that the CPU can draw. TDC is the amount of amperage the CPU is fed under sustained load (thermally and electrically limited). EDC is the amount of amperage the CPU is fed in short bursts (electrically limited). Allowing the CPU to take more power overall allows it to boost to higher clock speeds. From the PBO triangle analogy, this positively impacts the left and right vertices – SoC power and VRM current – while negatively impacting the top vertex – heat.
Curve Optimizer
Curve Optimizer allows you to undervolt your CPU. Undervolting means that you’re pushing slightly less voltage, which consumes less power and generates less heat. This, combined with Precision Boost Overdrive 2, means that you’re producing less heat, allowing the CPU to boost to higher clock speeds. From the PBO triangle analogy, this mostly impacts the top vertex – heat.
Striking a balance with your settings and overclocking your Ryzen 5000
Now that we’ve established our three main players, let’s tackle them one by one. To access these settings, you’ll need to go into your BIOS – they are typically located in Advanced -> AMD Overclocking -> Precision Boost Overdrive. Here’s a sample from my ASRock x570 Steel Legend.
After discussions with my readers, people seem to be suggesting different priorities when it comes to overclocking. I believe that a modest yet stable overclock can be achieved by prioritizing these:
Scalar / Max CPU Override
Power Settings
Curve Optimizer
Some readers believe that the best priority is:
Curve Optimizer
Power Settings
Scalar / Max CPU Override
If you are confused like me, pick the easiest and consider following this guide. Both will provide a nice performance gain, and the difference from one method to the other may be 1-2% more gain, which is negligible in real life.
Allows us to turn on PBO and to make manual adjustments to PBO settings
PBO Scalar – 10X
Should allow you to sustain boost clocks for longer.
Some readers debate whether this value should actually be 1x; I cannot verify this. These readers argue that setting it to 10x will raise your overall voltage. During my brief testing, I observed that this is not the case, but this statement can (and might) change with more testing.
Max CPU Boost Clock Override – 200 MHz
Raises your max frequency by 200 MHz. On a 5900x, this translates to a theoretical limit of 5150 MHz, which is realistic.
I am told by my readers that setting a +200 MHz boost on the Max CPU Boost Clock Override might negatively impact how much you’ll end up pushing on the Curve Optimizer. Unfortunately, I have neither the time nor the data to verify this.
PURE SPECULATION / MY THOUGHTS AHEAD (no data to back up this claim whatsoever) – By reducing the Max CPU Boost Clock Override, you’ll of course be losing the highest single-core boost clock speeds, **potentially** reducing single-core performance, but you’ll be able to push a higher multi-core score, or reach the “lower” max single-core clock more regularly. This will require extensive separate testing (and probably translates into margin of error when it comes to results).
Power Settings
In their slides (link above), AMD suggest using Power Limits = Motherboard. I strongly discourage this as it may limit your power intake (this was noticed both by me and by readers of my blog – My Experience with Precision Boost Overdrive 2 on a 5900X – Albert Herd, comment by Julien Galland).
For my 5900X, these are the settings that I’ve applied. If you’ve got a 5950X or a 5900X, these values may (or may not) be suitable for you. If you’ve got a 5800X or lower, these values are too high and will hinder performance; apply lower settings to accommodate your CPU – a decent bump over AMD’s default values quoted below. Unfortunately, I don’t own anything other than a 5900X, so I cannot vouch for these settings on other models.
If you’ve got very good cooling (such as a custom loop or strong cooling in general):
PPT – 185W
TDC – 125A
EDC – 170A
If your cooler will get too hot with these settings, try a more conservative setting. In my case, this setting hovers around 70-75C:
PPT – 165W
TDC – 120A
EDC – 150A
You might notice that your CPU runs too “cool” or too hot. In that case, adjust your figures accordingly. In a multi-core benchmark, these figures should all hit 100%. In most workloads, it’s the EDC that plays a role, not TDC (since most workloads are considered short bursts). I also noticed that going too low on EDC will cause instability.
Leave SOC TDC and SOC EDC at 0; these should not impact us (I believe they mostly apply to APUs).
For completeness’ sake, please keep in mind AMD’s default values when making adjustments to these values:
Package Power Tracking (PPT): 142W for the 5950x, 5900x and 5800x, and 88W for the 5600x.
Thermal Design Current (TDC): 95A for the 5950x, 5900x and 5800x, and 60A for the 5600x.
Electrical Design Current (EDC): 140A for the 5950x, 5900x and 5800x, and 90A for the 5600x.
Curve optimizer
This is probably the most annoying one. The numbers you’re inputting here will vary significantly from one chip to another, so your mileage may vary. These are my values:
Negative 11 for the first preferred cores on CCX 0 (as indicated by Ryzen Master)
Negative 15 for the second preferred core on CCX 0 (as indicated by Ryzen Master)
Negative 17 for the other cores.
If you want to start safe, you can apply a Negative 10 offset on all cores.
Testing this setting is extremely painful. You’ll notice that crashes will not happen under load; crashes will happen under idle conditions, where your CPU undervolts too much. Hopefully, AMD will look at this algorithm in future BIOS updates and provide more stability. In my experience, Geekbench 5 – Cross-Platform Benchmark is a great tool to stress my CPU out; it tends to crash the system when the settings are not right.
Please keep in mind the note that I’ve written about the Max CPU Boost Override (under the header – Precision Boost Overdrive 2). Some users note that they prefer to keep Max CPU Boost Override lower and push for a more aggressive curve.
In my next post, we will look at how to get the best performance from your RAM, by applying specific DRAM configurations according to the RAM sticks you own. If you feel adventurous and feel like you can do it on your own:
Looking for the TL;DR? These are my everyday settings:
PPT – 185W, TDC – 125A, EDC – 170A. To run these power settings, you’ll need a beefy cooler. If the CPU gets too hot with these power settings, try PPT – 165W, TDC – 115A, EDC – 150A
Negative 11 for the first preferred cores on CCX 0 (as indicated by Ryzen Master)
Negative 15 for the second preferred core on CCX 0 (as indicated by Ryzen Master)
Negative 17 for the other cores.
These moved my multithreaded Cinebench R20 score from 8250 to around 8800-9000 (6-9% gain) and my single threaded Cinebench R20 score from 630 to 650 (3% gain).
To get started, you will need to navigate to the BIOS. Unfortunately, for now you cannot use Ryzen Master to do this, but AMD claims that this will be part of Ryzen Master in a future release. In the PBO section, you will need to adjust some settings.
Navigating to AMD Overclocking in the BIOS
My specs are as following:
AMD Ryzen 5900x
ASRock X570 Steel Legend
32GB C17 Memory
750w PSU
240MM AIO from BeQuiet.
At first, naively, I set the power limits (PPT, TDC and EDC) to 0, which means unlimited. This in turn has a negative effect: it lets the CPU draw as much power as it can, which translates into unnecessary power consumption and heat, limiting the maximum clock speed achieved. I’d suggest sticking to values which will keep the CPU under (or close to) 80C under full load.
In my case, the maximum power settings I managed to sustain are: PPT – 185W, TDC – 125A, EDC – 170A. The recommended values for your CPU will vary according to the silicon quality and the cooling provided. Cooling 185W is not an easy feat; you’ll need a good cooler, such as a Noctua NH-D15 (noctua.at) or a good AIO (I am using the be quiet! Pure Loop 240mm).
Setting the PPT, TDC and EDC to well-balanced values is extremely important; this will help you strike the balance between the power the CPU needs and realistic temperatures. If the CPU gets too hot with these power settings, try PPT – 165W, TDC – 115A, EDC – 150A.
I have set the PBO scalar to manual and 10x. I will be honest: I am not sure what impact this has, but it looks like a setting which needs tweaking. I’ve tried 1x and honestly did not feel any difference. From what I understand, this controls how long the CPU keeps pushing high voltage and clocks before it dials them down. In burst scenarios, this should not have any impact.
Max CPU Boost Clock Override should be set to 200 MHz. This allows for higher clock speeds on single-threaded workloads. My 5900x can hit 5.15 GHz with this setting on a single core. 5.15 GHz is not a one-off number; I regularly see it during light workloads.
Navigating to the Curve Optimizer in BIOS
Now, for the most important part: the Curve Optimizer. For the best and second-best core on each CCD, I have set this to negative 10, and for the other cores I have set it to negative 15.
The next step is quite difficult to instruct, as it purely depends on your silicon quality. In my case, I found the following settings to work for me:
Negative 11 for the first preferred cores on CCX 0 (as indicated by Ryzen Master)
Negative 15 for the second preferred core on CCX 0 (as indicated by Ryzen Master)
Negative 17 for the other cores
It took quite a lot of testing to arrive at these figures. You can find the first and second preferred cores in Ryzen Master.
Per Clock adjustments in the Curve Optimizer
Firstly, I started with negative 20 on all cores. This resulted in awesome Cinebench R20 scores but poor stability. I then went to negative 15 on all cores. This was not bad, but I was experiencing a crash every now and then, especially when the PC was running cold and able to push more clocks. It would run all day, but on boot, pushing it would instantly result in a crash. This tells me that the algorithm was trying to push for more clocks, but the undervolting was too aggressive.
I then went to negative 10 on all cores and it was fully stable. Finally, I pushed negative 15 for those cores which are not first or second preferred. This remained stable, and eventually I started changing the values slightly every day. Sometimes I go too far and get a WHEA BSOD (especially when the PC is cool and under light workloads).
These moved my multithreaded Cinebench R20 score from 8250 to around 8800-9000 (a 6-9% gain) and my single-threaded Cinebench R20 score from 630 to 650 (a 3% gain). These are small gains, but when they come at no cost, it’s good to take advantage of them. And yes, these do not really translate to any tangible performance uplift in everyday computing.
Preferred Cores (Star is 1st, dot is second)
The performance uplift is thanks to higher sustained clocks. With PBO turned off, I was sustaining around 4.1 GHz core clock and with PBO on, I am sustaining between 4.4-4.5 GHz in Cinebench R20.
Cinebench scores with PBO2
Full load under Cinebench R20
Simpler (non-AVX) workloads will clock past 4.5 GHz. I suspect that Ryzen dials the clocks back a bit during AVX workloads, but I cannot confirm this.
Full load under a synthetic load – Memtest 64
Please let me know your experience with PBO2 and whether you find this post useful. If you got better settings than mine, I appreciate the feedback! Of course, keep in mind that as AMD said, no processor is the same; some might need more voltage than others to remain stable. It also depends on the power delivery quality, the sustained temperatures, the quality of the thermal paste, the overall case temperature and a plethora of other things, as mentioned in the first link to AMD’s site.
..because it doesn’t have enough privileges to do so! Starting from Windows 8.1, Protected Process Light (PPL) was introduced. Protected Processes were implemented in Windows Vista (and were mostly focused on DRM), but the mechanism was greatly improved, such as having different levels of protection depending on the application.
There are multiple uses for PPL, but for this post, let’s focus on antivirus software. We’ll also not be diving deep into how to develop this. If you’re developing this, you probably know far more than me and this blog.
Since antivirus (AV) software is at the forefront of stopping viruses from harming machines, it’s a very common target for viruses. Normally, AVs place a lot of rules and heuristics inside the application to protect against such threats, but PPL now enables antivirus software to run as a protected application under the PPL scheme.
There are multiple levels under the PPL scheme, from 0 to 7: 0 is no protection and 7 is maximum protection. The kernel is protected at level 7, critical Windows components at level 6, and antiviruses at level 3. Processes with a higher level have more power and trump lower levels in terms of accessibility. So, an antivirus cannot terminate a critical Windows process, since level 6 is higher than level 3.
Not all Windows components are protected under the PPL scheme. Critical applications such as smss.exe, csrss.exe and services.exe are under the PPL scheme, running at level 6. Applications such as Task Manager are not under the PPL scheme, and for a very valid reason.
The PPL scheme allows such AV services to be launched and protected from loading untrusted code. Given that an AV has the PPL value of PROTECTION_LEVEL_CODEGEN_LIGHT (https://docs.microsoft.com/en-us/windows/win32/api/processthreadsapi/ns-processthreadsapi-process_protection_level_information), all DLLs that it loads need to have an equivalent DLL signature level or higher. This ensures that no DLL foul play, such as file replacement or planting, takes place. These checks are done at DLL load time by the Code Integrity Windows component.
If you’re looking to create an application under the PPL scheme, you’ll need to get it signed by Microsoft. Given that only Microsoft can sign applications with these kinds of protection levels, viruses can’t have this kind of protection. Keeping in mind that PPL processes run with more privileges than non-protected processes (even those running as Admin), viruses simply cannot terminate AV processes.
You can see this for yourself – fire up Task Manager (in admin mode if you’d like) and try to close MsMpEng.exe (the Windows Defender service). If you try to terminate the process, Task Manager will just say “Access Denied”. Since Task Manager is not under the PPL scheme, it simply doesn’t have enough rights to terminate the AV.
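The same behaviour can be reproduced from code. Here’s a small sketch (assuming Windows Defender’s MsMpEng.exe is running); even when run as Administrator, the Kill call fails rather than terminating the service:

using System;
using System.ComponentModel;
using System.Diagnostics;

// Try to terminate Windows Defender's service process, which runs as a PPL process.
foreach (var process in Process.GetProcessesByName("MsMpEng"))
{
    try
    {
        process.Kill();
    }
    catch (Win32Exception ex)
    {
        // Typically "Access is denied" – our unprotected process cannot open a
        // PPL process with terminate rights, regardless of admin privileges.
        Console.WriteLine($"Could not terminate {process.ProcessName}: {ex.Message}");
    }
}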
Microsoft Cognitive Services is a rich set of AI services, such as Computer Vision, Speech Recognition, Decision making and NLP. The great thing about these tools is that you don’t really have to be an AI expert to make use of these tools, as these models come pre-trained and production ready. You’ll just feed it your information and let the framework work for you.
Let’s look at today’s scenario – we’re a fictitious bank which processes bank cheques. These cheques come handwritten from our clients and contain instructions on how to transfer money from one account to the holder’s account.
A cheque typically has the following information:
Issue Date
Payee
Amount (in digits)
Amount (in words, for cross reference)
Payer’s account number
Other information, which was omitted for this proof of concept
This is how our fictitious cheque looks when we’re looking at the regions we’re interested in, represented in bounding boxes.
Cheque Template with bounding boxes representing areas of interest
Let’s consider these three handwritten cheques.
The attached application does the following analysis:
Import these cheques as images.
Send the images over to Microsoft Cognitive Services
Extract all the handwriting / text found in the image
Consider only the text which we’re interested in (as represented by the bounding boxes previously)
Forward this extracted information to whatever system needed. In our case, we’re just printing them to screen.
The below is the resultant information derived from the sample cheques.
Results of Cheque Analysis
Most of the heavy lifting is done by the Microsoft Cognitive Services, making these AI tools available to the masses. Of course, with a bit more business logic, the information that can be extracted from these tools can be greatly improved, making them production ready.
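For reference, the handwriting-extraction step can be done with the Computer Vision Read API roughly as sketched below (the endpoint, key and file name are placeholders, and the bounding-box filtering used by the attached application is omitted):

using System;
using System.IO;
using System.Threading.Tasks;
using Microsoft.Azure.CognitiveServices.Vision.ComputerVision;
using Microsoft.Azure.CognitiveServices.Vision.ComputerVision.Models;

var client = new ComputerVisionClient(new ApiKeyServiceClientCredentials("<api-key>"))
{
    Endpoint = "https://<your-resource>.cognitiveservices.azure.com/"
};

// Submit the cheque image; the Read API processes it asynchronously.
using var chequeImage = File.OpenRead("cheque1.png");
var readHeaders = await client.ReadInStreamAsync(chequeImage);
var operationId = Guid.Parse(readHeaders.OperationLocation.Split('/')[^1]);

// Poll until the handwriting recognition completes.
ReadOperationResult result;
do
{
    await Task.Delay(1000);
    result = await client.GetReadResultAsync(operationId);
} while (result.Status == OperationStatusCodes.Running || result.Status == OperationStatusCodes.NotStarted);

// Every recognised line comes back with its text and a bounding box, which can
// then be matched against the regions of interest (date, payee, amounts, account).
foreach (var page in result.AnalyzeResult.ReadResults)
    foreach (var line in page.Lines)
        Console.WriteLine($"{line.Text} @ [{string.Join(", ", line.BoundingBox)}]");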
As with the previous example, this example uses the TPL Dataflow library, which is an excellent tool for Actor-Based multithreaded applications.
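If you haven’t come across TPL Dataflow, the shape of such a pipeline is roughly the sketch below (the block bodies are placeholders for the download and analysis steps described above):

using System;
using System.Threading.Tasks;
using System.Threading.Tasks.Dataflow;

// Analyse up to four cheques in parallel, then print whatever was extracted.
var analyseCheque = new TransformBlock<string, string>(async fileName =>
{
    await Task.Delay(100); // placeholder for the Cognitive Services call
    return $"extracted text for {fileName}";
}, new ExecutionDataflowBlockOptions { MaxDegreeOfParallelism = 4 });

var printResult = new ActionBlock<string>(Console.WriteLine);

analyseCheque.LinkTo(printResult, new DataflowLinkOptions { PropagateCompletion = true });

foreach (var cheque in new[] { "cheque1.png", "cheque2.png", "cheque3.png" })
    analyseCheque.Post(cheque);

analyseCheque.Complete();
await printResult.Completion;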
Let’s face it – staying at home under quarantine and lockdown due to COVID-19 is not fun at all. You might want to watch some videos in sync with your loved ones. You can take the silly and manual way and say – 3,2,1, PLAY! That may work, but if someone needs to pause, you’ll need to go through that silly 3,2,1 again! What if there are tools to help you out?
Turns out, there is Syncplay – https://syncplay.pl/. Syncplay is a free tool that allows you to synchronize what video you’re watching, even if you are miles away. It works with modern playback tools; I’ve tested this tool with VLC and it was great.
Let’s set it up! Firstly, go to https://syncplay.pl/ and download your version. In my case, we’ve tried it on both Windows and Mac and it was great. Once you’ve downloaded it, fire the app up. You’ll need a server so that everyone can connect and play. Syncplay provides free servers to connect to; here’s a list:
syncplay.pl:8995
syncplay.pl:8996
syncplay.pl:8997
syncplay.pl:8998
syncplay.pl:8999
Fill out the following fields:
Server Address -> Pick one from the list above
Username -> Just your name so your friends can know who is who
Default Room -> This should be unique so that only your friends can join. Just pick a fancy name which is unique to you.
Path to media player -> Here, we’re telling Syncplay which application to use to play videos. I’m using VLC – find the path of the application and save it.
I also recommend “Enable shared playlists” but this is completely optional.
Syncplay – configuration and connection
Once you’re done, hit “Store configuration and run Syncplay”. If all goes well, Syncplay will show the below screen, and your media player should start. In my case, we’re two people in the room.
Two people in the same room
To start playing a video, go to Syncplay -> File -> Open Media File -> and navigate and load your file. Make sure that everyone loads the same file with the same name. You’ll get a notification if the file is not the same, such as the below.
File name, size and duration are not the same; you’ve loaded a different file.
If the same file name with the same length is opened, you’ll see the below.
Files are the same – just waiting for people to be ready
Once you’re ready, from Syncplay, press I’m ready to watch!
Checkbox to show you’re ready
When everybody is ready, you can proceed and play! Any one person can play and pause video for all the other users in the room. Enjoy synchronized playing!
You can also comment during playback and it will appear in both VLC and Syncplay – it’s like you’re almost next to each other!
Chat appearing both in VLC and Syncplay
There are many other features you can explore in Syncplay:
Let’s immediately jump through the points, then some back-story.
Only submit credit card details to make a purchase on shops which are well known, such as Amazon, eBay or official product merchants.
Use third-party payment capabilities where possible, such as PayPal. With this method, the merchant never has access to your credit card details, only to the authorized funds.
Ideally use a debit card rather than a credit card online. In case your card gets stolen, the thief can only use the available funds, without ending up in debt.
Also, only transfer the money you’re going to use when making the purchase. The card should be empty in general.
Use applications such as Revolut which have capabilities to enable / disable online transactions on demand.
Revolut also allows for disposable credit cards – the number changes every time you make a transaction. This means that even if your credit card is stolen, the card is now dead and worthless. You’ll need to pay a monthly fee, though.
Avoid saving credit card details in your browser. Although the CVV is usually not saved, writing it down again won’t take long. This avoids the possibility of a virus sniffing your credit card details if they are saved.
Why am I writing these? I’ve just stumbled on a Cyber Security Episode done by MITA / Maltese Police (great initiative, by the way). Although the content made sense, I felt that some practical points were missing (original video here). The premise of the video is to only buy from sites using HTTPS as it’s secure and you’ll know the seller. Some points on the premise:
Running HTTPS ONLY GUARANTEES that the transmission between you and the merchant is secure. It does NOT mean you truly know who’s responsible as a merchant. There are ways to “try” and fix this (through EV certificates) but EVs are probably dead as well.
What happens after your credit card details are securely transmitted is unknown. The following may occur:
The merchant might be an outright scammer running a merchant site with an SSL certificate (which, nowadays, can be obtained for free – https://letsencrypt.org/)
The merchant might store your credit card details insecurely; they may end up getting hacked and the credit card details will get stolen.
BOV has a problem – Revolut is seriously eating into its card payments business (and profits). Revolut in Malta is very popular, and for a very good reason. No, it’s not because it provides free VISA / chip / contactless cards; BOV does that as well. It’s because Revolut’s mobile app is just awesome. BOV seems to have (finally) noticed this and has Revolut in its sights. How can BOV one-up Revolut? Tap-to-pay purchases!
BOV has been working on its new BOV PAY app – this will enable a mobile user to make purchases by tapping their phone. Mobile vendors have already developed this, though; the likes of Google Pay, Apple Pay and Samsung Pay already exist. But here’s the problem – in Malta, only Apple Pay is officially supported, so Android users are hung out to dry (unless you follow my unofficial guide here).
Smartly enough, BOV prioritized launching an app for Android first. The reason is simple – since there is no official tap-to-pay application for Android in Malta, they’re tapping (pun intended) into an untapped market. So, Android users, rejoice! ...or should we?
You see, the application is in a very early phase; I’d say it’s just an MVP. The application is sluggish, the user experience is quite rubbish and the app is riddled with bugs. Tap-to-pay works, but the experience is frankly poor. Once I tapped my phone to pay, the app took more than 5 seconds to simply show the BOV screen and the card that I paid with. But alas, it works.
Let’s discuss the next most infuriating thing. Honestly, once they get this (properly) working, I’ll seriously consider switching back to using BOV cards instead of Revolut cards daily. There is NO WAY to INSTANTLY see the transactions that I’ve made!
Well, firstly, the new BOV PAY app does not show any transactions whatsoever. I’m guessing that this is a work in progress, and I should be using the older app to check the transaction. OK, fine. But still, the old app does not let me INSTANTLY see my transaction. I’m able to see that there is a difference between book balance and available balance, but the transaction does not show up. This is PAINFUL – show me that I’ve made a payment, even though the payment may be reverted if unclaimed by the merchant. It’s my right to see where my money went! UGH!
Anyway, let’s dive into some other criticism about the app; in no order:
It takes at least 5 seconds to get the BOV logo and the loading spinner
It takes another 5 seconds (or more) from successfully recognising my fingerprint to reaching the main menu.
I had to add all my personal details and sign up. Why did I have to provide my name, address and other information? Can’t this be derived from ownership of the card? Or maybe through the authentication of the classic BOV app?
When I press back, I am prompted to log out; why? Isn’t it obvious that I’m the only person using this app? Logout should be buried somewhere; I don’t need to see it every time.
The card controls feature is very weak – it only has ONE option. I’m guessing that this is WIP – no harm, no foul.
Again, I’m not sure why, through Review Card Details, I’m able to edit information such as the expiry date, CVV and address. This needs to be done the other way round: I’d prove who I am, cards would be added automagically for me, and the relevant information would be obtained from the bank’s records. This is a VERY WEIRD feature.
From a UI experience, why does it load the picture of the card each time? I’m always looking at a spinner before the card image is displayed.
Expiring sessions? Really? What is it protecting me from? As soon as the app is dismissed, THEN and only THEN should the session be terminated.
The locator feature: I’m assuming that this shows a list of ATMs? Poor naming?
The settings section looks very basic, but I guess it gets the job done. OK.
The About Us is a very cheeky feature; it’s just a WebView of the BOV site. Lazy!
Tutorials
The Pay-In-Site tutorial is good and easy, but the button to go to the cards section doesn’t work.
Using the App is, again, a WebView of the BOV site.
Contact us – again, WebView.
Terms and Privacy – All WebViews (oh, did I mention that the site is NOT mobile friendly?)
I must also mention that I’ve forwarded much of this feedback to BOV, and they instantly got back to me over the phone to listen to it. I must give credit where it’s due. I’m hoping that they take the feedback that clients are giving them so that they can create an awesome app.
If you spin up a MySQL Docker container, you’ll notice that once the container is removed and recreated, all the information is lost! In order not to lose any information from your MySQL Docker database, a volume will need to be attached to the container. Let’s do that!
Let’s create a new volume – this will be used to store all your database information:
docker volume create mysql
Once the volume is created, it can be attached to a newly spun-up MySQL container:
docker run --name mysql -e MYSQL_ROOT_PASSWORD=albert --mount source=mysql,target=/var/lib/mysql -d mysql
Any databases created on this MySQL instance are now preserved! Let’s test it out. Let’s connect to the container and create a new database:
docker exec -it mysql mysql -uroot -p
Supply your password (in this example, the password would be the value that we’ve supplied for MYSQL_ROOT_PASSWORD – albert).
Once you’re connected, let’s see the current databases. The default installation will have 4 default databases.
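With a fresh MySQL 8 image, the output of SHOW DATABASES should look roughly like this (the exact list may vary slightly between versions):
mysql> SHOW DATABASES;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| sys                |
+--------------------+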