
Synology DS Quirks

Updated: Jan 4

I have been running my own Synology DS918+ NAS since around March 17th of 2020, and it has mostly been in a kinda OK mode.

I used to have a QNAP TS-563 system, but the motherboard on that thang died after giving me grief for a long period of time, and it was a thoroughly horrid QNAP experience I will never be repeating.

In comparison to the QNAP, the Synology seemed much better as a NAS, and it was. Only it still did strange things every now and again...

The QNAP itself had finally died in late December of 2019, and I got the replacement at the start of the pandemic, on St Paddy's Day 2020, because it was obvious this lockdown crap was going to be a two-year affair. My pals think I am psychic because of the things I advised them all to do with their home networking and NAS setups ahead of the pandemic shenanigans that followed that initial lockdown bollocks - the one that was supposed to let the bureaucrats "catch up with the Covid curve," whatever the fuck that meant...

Many of my technology disciples bought Synology systems of all sizes for their home use cases once we could again get gear the Government had requisitioned as "Pandemic only emergency gear" (basically everything), and I got to set up a good few of them, as well as a myriad of various WiFi systems - over 100 different ones, in fact.

I think this little task kept me from visiting the far side in the early Scamdemic actually.....

My own use for the NAS thang was mainly storing SciFi movies as a movie library, plus general file-server duty serving my active work files and long-term archive storage for all my current and past work-related stuff, so that I had a copy on my laptops and desktops as well as on a NAS device running RAID 5.

I transferred the 16GB of RAM from my failed QNAP to the new Synology NAS and it seemed to work fine at first, though I did get occasional stalls and weird freezes when watching movies and doing Visio work at the same time on the new Synology thang, FRED.

Of late, the DS918+ has gone unusable: the blue power light flashing, no Ethernet access possible, and all the disk LEDs locked on solid green.

I read up on the fault condition, removed all disks including the NVMe SSDs as suggested in the baseline troubleshooting guide, and it started back up A-OK - in the raw, sans disks.

I had put two Crucial P2 M.2 2280 500GB NVMe SSDs in it after the last pair failed in 2022, but I suspected the Cache 2 SSD had failed again, despite no reported issues in Synology DiskStation Manager.

I suspected it because that was the last device I removed before the unit would boot up again.

I put the parts back, set by set, until it did the flashing-blue-power-light thang again, and it was that NVMe SSD causing the shit - so by process of elimination, that was determined to be the failed device.

While reading up on these issues this time around, I noticed that a lot of folks, and Synology themselves, made comments that they do not support the DS series armed with 16 GB of RAM. PERIOD.

This seems to be the case with all Celeron-based Synology NAS systems, in fact.

As a result of this info, I dived into why this was so and did my own research and testing.

To my amazement, the darn thing runs way better with just 8 GB of RAM in the bastard thang!

The Intel ARK specs for the Celeron found on the DS918+ motherboard say it supports 64GB of RAM, but there is a footnote that clips its wings for Linux use.

DSM runs on a Linux kernel, in fact.

This is true for ALL the Intel Celeron CPUs Synology was using across their lineup as well, by the way.

I did a quick poll of the many dozens of systems I had installed and found that the ones with 8 GB of RAM all worked flawlessly, sans any issues at all.

I got a strong lamp, interrogated the owners, and examined their log files, Spanish Inquisition style.

The volume cache on the DSM system does not work properly with 16 GB of RAM for some reason, and if you are having these issues with 16 GB fitted, Synology tech support will tell you it is unsupported and will not help you any further with your maladies either.

So I pulled one of the 8 GB RAM sticks and did some testing, as well as ordering two 4 GB SODIMMs for it from Amazon at $22 each.

On 6/21/2023 one of the DIMMs turned up by itself - as useful as tits on a bull, that... again.

The second one in the set arrived on 6/22/2023 and the wind had swept it halfway down the street - lucky I went looking for it...

The two 4GB A-Tech DDR SODIMMs I bought for it on Amazon are doing better than the single 8GB SODIMM stick I had reduced it to.

I now have a 16GB SODIMM kit (2 x 8GB) spare to sell to any QNAP NAS user.

As I said, the NVMe SSD in Cache slot 2 seems to be the problem child, but again, no fault is detected with it per se in DSM Storage Manager - it actually reports healthy?
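Storage Manager can say "healthy" while the drive's own wear counters tell a different story. As a hedged sketch - assuming you have SSH enabled on the NAS and `smartctl` available there (DSM bundles smartmontools; the `/dev/nvme1` path is an assumption, so confirm yours with `ls /dev/nvme*`) - you can pull the raw NVMe health attributes yourself:

```shell
# Hypothetical device path - confirm with `ls /dev/nvme*` on your unit.
# "Percentage Used" near or past 100% and any non-zero
# "Media and Data Integrity Errors" mean the drive is worn out,
# even if Storage Manager still reports it healthy.
sudo smartctl -a /dev/nvme1 | grep -Ei "percentage used|integrity errors|data units written"
```

"Data Units Written" is also the number to watch over time if you suspect cache thrashing is chewing through drives.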

It seems that with 16GB of RAM in a DS series NAS, there is a lot of unnecessary writing to RAM and to both NVMe SSDs, if you have armed both M.2 slots like I did.

A lot of it looks like errors and retransmission of data between the RAM and the SSD layer.

This wears out the Cache 2 NVMe SSD and explains why I have toasted 4 of them already.

This means there is a HUGE pile of reads and writes going on, with index-table swapping and such between RAM and SSD.

I observed way more action between RAM and SSD at 16GB than at 8GB RAM.
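That "way more action" is measurable rather than just a feeling. A minimal sketch, assuming SSH access to the NAS: `/proc/diskstats` is standard on any Linux kernel, DSM included, and its tenth field is the cumulative count of 512-byte sectors written per device, so sampling it shows how hard the cache SSDs are being hit:

```shell
# Print cumulative gigabytes written to each NVMe device.
# In /proc/diskstats, field 3 is the device name and
# field 10 is sectors written (512-byte units).
awk '$3 ~ /^nvme/ { printf "%s: %.1f GiB written\n", $3, $10 * 512 / 2^30 }' /proc/diskstats
```

Run it, wait ten minutes under a normal workload, and run it again: the delta is your real write rate. If the 16GB cache-thrashing theory holds, that delta should drop dramatically after going back to 8GB.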

I often saw my CPU going to 88% busy, which should never happen on these things.

After I pulled off a full data scrub with 8 GB of RAM and the new 1 TB SSD for Cache 2, the thing is not only faster but a lot more stable.

After the data scrub, while watching a movie and transferring 243GB of data to the NAS, the CPU got no busier than 8%.

I had to do the data scrubbing several times because I also decided to watch a movie while it was running, which is a big no-no during DSM data scrubbing.

There can be nothing else going on on the NAS while data scrubbing runs; otherwise it will freeze the DSM OS and you will be rebooting several times, so forget using it while the data-scrubbing lark is in progress.

I learnt this the hard way on 6/19/2023, which was my day off from work.

The people I found on Reddit who persisted with 16GB of RAM also bitched about how quickly their SSDs wear out... Duh!

I had been using Crucial NVMe Gen 3 SSDs (P2 M.2 2280 500GB).

I had a spare Kingston KC3000, an NVMe Gen 4 drive that will happily operate in Gen 3 mode, so I stuck that in and activated the cache.

Synology say the NVMe must be 4K Native (4Kn), which most NVMe drives are not out of the box, but they can be made that way with some Linux CLI jiggery.

In Windows, run CMD as Administrator and execute: fsutil fsinfo sectorinfo C:

The "Logical bytes per sector" field (and the related physical-sector fields) must display 4096 for the drive to be in 4Kn (4K Native) mode.

There is no way to do this without toasting all the data on the drive, by the way, so you will need to re-install Windows 11 on it again if doing this 4Kn thang on a Windows machine.

For DSM use you will need to use Linux tools to get it to 4KN mode.
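A sketch of what those Linux CLI commands look like using nvme-cli from an Ubuntu live session. Everything here is an assumption to verify against your own hardware: the drive may not appear as /dev/nvme0n1 (check with `nvme list`), the 4096-byte format may sit at a different index than lbaf 1, and many consumer drives only offer 512-byte formats, in which case 4Kn is simply not possible. The format step destroys all data on the drive:

```shell
# Install the NVMe admin tool in the Ubuntu live session.
sudo apt install -y nvme-cli

# List the LBA formats the drive supports; look for an entry with
# "Data Size: 4096 bytes" and note its index (e.g. lbaf 1).
# If no 4096-byte entry exists, the drive cannot do 4Kn at all.
sudo nvme id-ns /dev/nvme0n1 --human-readable | grep "LBA Format"

# DESTRUCTIVE: reformat the namespace with that LBA format index.
sudo nvme format /dev/nvme0n1 --lbaf=1

# Confirm the logical sector size now reads back as 4096.
cat /sys/block/nvme0n1/queue/logical_block_size
```

This only changes the drive's logical sector size; you still partition and hand it to DSM afterwards as usual.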

You will also not see much difference in performance but the device will last longer in 4K native mode.

There is an infuriating tool called HUGO you can get from WD for this as well by the way but expect the runaround from this crap software!

This 4K Native mode is actually quite irksome, because the Windows native tools will format the drive as NTFS and only let you select 4096 as the allocation-unit size in the quick-format options - which is not the same thing as 4Kn logical sectors.

And NTFS means nothing to Linux as a boot device anyway, so you need to run some Linux CLI commands to force true 4Kn mode.

Linux will not install to or boot from an NTFS drive, which is a bit of a drag, actually...

You need to put Ubuntu on a USB stick, boot from it, and use its disk utility tools on the target SSD.

This assumes the computer you are using has one or two M.2 NVMe SSD slots, of course...

After much prancing around like a prima donna on a bed of hot coals, and dithering like a duck torn between worms and bread, I decided I would not actually make the 4K Native change until the next SSD wears out.

The SSD-to-RAM traffic is now 3000x lower than before, and the CPU in normal ops for storage and video streaming seldom gets over 8%.

As such I am inclined to adopt the "if it ain't broke don't fix it" MO here.

However, I have done it on 3 of my pals' systems (suckers), who were all totally aghast at not being in 4Kn mode in this day and age (total geek metrosexuals, those three characters), so I used them as guinea pigs before I go there myself.

I used the Synology for a week of concurrent multi-role duty: 8 devices all streaming different movies, two separate workstations doing Visio and AutoCAD work, and I even set my Threadripper rigs indexing the entire NAS.

If I had done this when it still had the 16GB RAM in it, all workstations woulda hung it dead.

I even played videos off the NAS on my new 120" Sony 8K Smart TV while the workstations were beating it up.

Not one glitch while I watched the fan cut of Prometheus, which is a nearly three-hour-long movie.

I enjoyed it so much I forgot all the other shit it was doing and amazingly The NAS performed real swell.

The only shit I have had with any of my systems since the NAS was fixed was the RM1000i PSU in my Ryzen 5900 company-use workstation: it started freezing 5 seconds after reboot, and Event Viewer showed ID 41, which this time round really was a PSU failure.

I ordered a new Gamestop 850W RGB PSU on 6/29 for $99 with a $10 discount coupon; it was delivered at 7 AM on 6/30, and I was already ripping out the old Corsair 1000i as the Amazon chappie arrived in his personal chariot.

It was up and running again before 8:20 AM.

July 24th 2023 Update: The thing just runs and serves files without a hiccup now.

Cutting the RAM back to two 4GB DDR SODIMMs was all it needed to jump into super-reliable appliance mode of operation.

I have tried to get it to lock up but cannot; it just does its thang, as it should!

Happy Days are here again, the skies above are clear again!

It is now January of 2024 and I am happy to report that the thing is running sweet as a nut, with zero issues at all, and I have thrown some serious workloads at it from 20 devices concurrently these past 6 months.

I came into possession of several old NetApp 2950 systems that were never deployed by the former customer, but these things are so noisy and power-hungry I am not going to deploy them in my downstairs data center.

I did spend the time between Christmas and New Year's re-configuring my downstairs lab, and will do the same at the Eden, Utah house this weekend, where I will deploy the 2950 systems in the aircraft hangar.

The hangar there is also completely off the grid and has a large battery-backup scheme for the electric motor-glider and several EV systems.

The available usable storage is 22TB per 2950, configured in local-mirror, Active-Passive style.

I back up my NAS in Brentwood (in Northern California) to that NAS over a dedicated Starlink satellite connection, and I closed my AWS account as it became too expensive to manage for the almost zero use it got.

The Starlink is much better than my other internet links and does not have the DNS issues I see on Xfinity from time to time.

It seems to do things faster despite the slower link speed, but I think this is because the connection is persistent and does not do any DNS flapping.

