Suggestions for improvements: NVMe volume persistence checks and DSM 7.3+ NIC handling improvements
Hi, first of all thank you for your great work on these scripts.
I am using syno_hdd_db.sh on a DS1821+ with an official E10M20-T1 card and NVMe volumes, and everything works correctly thanks to your patches.
During detailed diagnostics on DSM 7.3.2, I noticed a few areas where the script could be improved or where additional checks would help users avoid problems, especially on Ryzen-based Synology models (DS1621+, DS1821+, RS1221+, etc.).
This issue is not about a bug, but about possible enhancements.
1. DSM 7.3+ network persistence issues with Aquantia AQC107 (E10M20-T1)
On DSM 7.3+, the system partially regenerates the network configuration during major updates or network restarts.
As a result, the Aquantia NIC (10GbE on E10M20-T1) may appear as a new network interface after updates, even if the MAC address remains the same.
This causes:
- VMM losing the assigned LAN 5 interface
- Virtual Switch losing the binding to `ovs_eth4`
- DSM generating a new internal UUID for the NIC
This is not caused by syno_hdd_db.sh, but the script could optionally detect or warn when DSM is likely to recreate the interface.
Possible enhancements:
- Detect Aquantia AQC107 PCI IDs (`1d6a:07b1`)
- Check whether a persistent NIC mapping is in place
- Warn users if the network configuration may be overridden by DSM on the next upgrade
- Optionally generate a helper script under `/usr/local/etc/rc.d/` to preserve the NIC MAC/interface mapping
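As a starting point, the first item could look something like the sketch below. The function and variable names (`has_aqc107`, `AQC107_ID`) are hypothetical, and the PCI ID is the one quoted above; the function accepts an optional pre-captured `lspci -n` listing so it can be tested without the hardware present.

```shell
# Hypothetical sketch: detect an Aquantia AQC107 NIC by PCI vendor:device ID.
AQC107_ID="1d6a:07b1"

# Succeeds if the supplied "lspci -n" output (or the live output when no
# argument is given) contains the AQC107 vendor:device ID.
has_aqc107() {
    local pci_list="${1:-$(lspci -n 2>/dev/null)}"
    grep -qi "$AQC107_ID" <<< "$pci_list"
}

if has_aqc107; then
    echo "NOTE: Aquantia AQC107 detected - DSM may regenerate this NIC's mapping on upgrade."
fi
```
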
This could help many users who experience disappearing NICs after updates.
2. NVMe volume safety checks
Users who already have NVMe volumes (not cache) on E10M20-T1 or on PCIe adapters would benefit from additional diagnostics.
Currently the script patches compatibility tables correctly, but does not explicitly check:
- whether NVMe volumes already exist
- whether NVMe devices are mapped correctly in `synostorage`
- whether DSM will treat NVMe drives as "cache only" after upgrades
- whether `model.dtb` and the runtime device tree are consistent
Suggested improvements:
- Add an "NVMe Volume Status" section
- List existing NVMe arrays and partitions
- Warn if the user is running full NVMe volumes without `-p`
- Check for missing DTB entries for the E10M20-T1 and warn if needed
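A minimal "NVMe Volume Status" section could be sketched as below. The helper name `nvme_md_arrays` is hypothetical; it parses `/proc/mdstat`-style input (taken on stdin so it is testable) and reports md arrays whose members are NVMe partitions, which is how DSM-created NVMe volumes typically appear.

```shell
# Hypothetical sketch of an "NVMe Volume Status" report.

# Print the names of md arrays whose member list includes NVMe partitions,
# given /proc/mdstat-style content on stdin.
nvme_md_arrays() {
    awk '/^md[0-9]+ :/ && /nvme/ {print $1}'
}

echo "--- NVMe Volume Status ---"
if [ -r /proc/mdstat ]; then
    arrays=$(nvme_md_arrays < /proc/mdstat)
    if [ -n "$arrays" ]; then
        echo "NVMe-backed md arrays: $arrays"
    else
        echo "No NVMe-backed md arrays found."
    fi
fi
# List raw NVMe namespaces, if any are present
lsblk -dno NAME,SIZE /dev/nvme*n* 2>/dev/null || true
```
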
This would increase safety for users running VMM or Docker on NVMe volumes.
3. Warning for read-only system paths under DSM 7.3+
Some DSM 7.3+ system paths (e.g. `/usr/syno/etc/network/`, `/etc.defaults/`, the runtime `model.dtb`) are mounted from squashfs and cannot be modified even after `mount -o remount,rw /`.
A check for this condition could help users understand why certain patches cannot be applied.
Suggested enhancement:
- Detect filesystem immutability (RO) and warn the user
- Clarify which patches are impossible on DSM 7.3+ due to read-only system partitions
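The detection part could be sketched roughly as follows. This assumes `findmnt` (util-linux) is available on DSM; the helper names are hypothetical, and the options parsing is split into its own function so it can be checked against sample option strings.

```shell
# Hypothetical sketch: warn when a target path sits on a read-only mount.

# True if a comma-separated mount-options string contains a standalone "ro"
# flag (avoids false matches on e.g. "errors=remount-ro").
opts_are_readonly() {
    printf '%s\n' "$1" | tr ',' '\n' | grep -qx ro
}

# True if the filesystem containing the given path is mounted read-only.
# Assumes findmnt from util-linux is present.
is_readonly_path() {
    opts_are_readonly "$(findmnt -n -o OPTIONS --target "$1" 2>/dev/null)"
}

if is_readonly_path /etc.defaults; then
    echo "WARNING: /etc.defaults is read-only; patches to it cannot be applied." >&2
fi
```
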
4. Optional automatic safety check: prevent running the script from NVMe volumes
Although the script warns users not to store it on NVMe volumes, many miss the message.
A simple automatic check could prevent misconfiguration:
```shell
echo "Checking script location..."
# df -P prints the device backing the script's directory; refuse to run
# if that device is an NVMe namespace.
script_dev=$(df -P "$(dirname -- "$0")" | awk 'NR==2 {print $1}')
case "$script_dev" in
    *nvme*)
        echo "ERROR: Please do not run this script from an NVMe volume." >&2
        exit 1
        ;;
esac
```

This would avoid a common user mistake.
Summary
The script works perfectly on my DS1821+ and has been essential to enable NVMe volumes.
These suggestions aim to improve safety, clarity, and resilience for DSM 7.3+ users, especially those using:
- official Synology PCIe cards with NVMe
- Aquantia 10GbE NICs
- VMM or Docker on NVMe volumes
- non-standard hardware configurations
If these ideas are useful, I'd be happy to help test future enhancements.
Thanks again for your excellent work and for maintaining this project!