How to make MyController SD-card friendly? (Highly experimental!!)

  • MOD


    Did a lot of research, and a great solution exists called log2ram. I used this as the base for the db2ram script.

    !! Caution: since RAM is volatile, all data will be lost in case of a power outage! Make sure to schedule regular backups !!

    Add this to the start script in bin/, and set the correct directory where mycontroller is located, in this case /opt/mycontroller:

    # Set the variable below to 1 if you want the database on a ramdisk.
    # Useful when running mycontroller from an SD card, since keeping the database
    # in RAM limits the number of writes to the SD card.
    # Since a ramdisk is volatile, make sure to set up a backup policy that runs every hour.
    # The script used is based on log2ram.
    DB2RAM=1
    if [ "$DB2RAM" = 1 ]; then
       echo "Database will be in ram..."
       $APP_LOCATION/bin/db2ram start
    fi

    Add this to the stop script in bin/:

    /bin/mount | grep tmpfs | grep mycontroller > /dev/null 2>&1
    if [ $? -eq 0 ]; then
       echo "Database in ram, syncing to disk..."
       ./db2ram stop
    fi

    Create a new file called bin/db2ram and copy everything below into it.
    (Set the size of the ramdisk to the required size; in my case 30 MB is enough.)

    #!/bin/sh
    # db2ram -- keep the mycontroller database on a ramdisk, based on log2ram (MIT license).
    #
    # SIZE defines how much space the ramdisk will reserve. If it is not big enough,
    # db2ram will not be able to use RAM, so check the size of your database folder
    # first. In my case 30M is enough; increase it if your database is larger.
    SIZE=30M
    # Path to the main directory of the mycontroller software
    APP_LOCATION=/opt/mycontroller
    #APP_LOCATION=`/bin/pwd | sed 's/\/bin//'`
    # Set to true if you prefer "rsync" over "cp". The commands cp -u and rsync -X
    # are used, so the whole folder is not copied every time. Be sure rsync is
    # installed if you use it.
    USE_RSYNC=false
    # If there is an error with the available RAM space, a system mail will be sent.
    # Change it to false and you will only get a log entry when RAM runs out.
    MAIL=true

    RAM_LOG=$APP_LOCATION/conf
    HDD_LOG=$APP_LOCATION/conf-hdd.log
    LOG2RAM_LOG=$RAM_LOG/db2ram.log
    LOG_OUTPUT="tee -a $LOG2RAM_LOG"

    isSafe () {
        [ -d $HDD_LOG/ ] || echo "ERROR: $HDD_LOG/ doesn't exist!  Can't sync."
        [ -d $HDD_LOG/ ] || exit 1
    }

    syncToDisk () {
        isSafe
        if [ "$USE_RSYNC" = true ]; then
            rsync -aXWv --delete --exclude db2ram.log --links $RAM_LOG/ $HDD_LOG/ 2>&1 | $LOG_OUTPUT
        else
            cp -rfup $RAM_LOG/ -T $HDD_LOG/ 2>&1 | $LOG_OUTPUT
        fi
    }

    syncFromDisk () {
        isSafe
        if [ ! -z "`du -sh -t $SIZE $HDD_LOG/ | cut -f1`" ]; then
            echo "ERROR: RAM disk too small. Can't sync."
            umount -l $RAM_LOG/
            umount -l $HDD_LOG/
            if [ "$MAIL" = true ]; then
                echo "DB2RAM : No place on RAM anymore, fallback on the disk" | mail -s 'DB2Ram Error' root
            fi
            exit 1
        fi
        if [ "$USE_RSYNC" = true ]; then
            rsync -aXWv --delete --exclude db2ram.log --links $HDD_LOG/ $RAM_LOG/ 2>&1 | $LOG_OUTPUT
        else
            cp -rfup $HDD_LOG/ -T $RAM_LOG/ 2>&1 | $LOG_OUTPUT
        fi
    }

    wait_for () {
        while ! grep -qs $1 /proc/mounts; do
            sleep 0.1
        done
    }

    case "$1" in
      start)
          [ -d $HDD_LOG/ ] || mkdir $HDD_LOG/
          rm $LOG2RAM_LOG >/dev/null 2>&1
          mount --bind $RAM_LOG/ $HDD_LOG/
          mount --make-private $HDD_LOG/
          wait_for $HDD_LOG
          mount -t tmpfs -o nosuid,noexec,nodev,mode=0755,size=$SIZE mycontroller $RAM_LOG/
          wait_for $RAM_LOG
          syncFromDisk
          ;;
      stop)
          syncToDisk
          umount -l $RAM_LOG/
          umount -l $HDD_LOG/
          ;;
      write)
          syncToDisk
          ;;
      *)
          echo "Usage: db2ram {start|stop|write}" >&2
          exit 1
          ;;
    esac

    After creating the script, make it executable:

    chmod +x bin/db2ram

    (Debian / Armbian only)
    To sync the database to disk hourly from cron, create a file called db2ram in /etc/cron.hourly/:

    vi /etc/cron.hourly/db2ram


    #! /bin/sh
    /opt/mycontroller/bin/db2ram write > /dev/null

    Set the correct permissions:

    chmod +x /etc/cron.hourly/db2ram

    Reboot the system.

    Now verify that the database is located in RAM with the mount command:

    root@orangepizero:/etc/cron.hourly# mount | grep mycontroller
    /dev/mmcblk0p1 on /opt/mycontroller/conf-hdd.log type ext4 (rw,noatime,nodiratime,errors=remount-ro,commit=600)
    mycontroller on /opt/mycontroller/conf type tmpfs (rw,nosuid,nodev,noexec,relatime,size=30720k,mode=755)
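
    The same check used in the stop hook above can be wrapped into a small helper, e.g. for a cron watchdog. A sketch; the function name `db_in_ram` is my own invention, not part of MyController:

    ```shell
    #!/bin/sh
    # Hypothetical helper: exit 0 when the named tmpfs is mounted, non-zero
    # otherwise. Defaults to the "mycontroller" tmpfs created by db2ram start.
    db_in_ram() {
        mount | grep tmpfs | grep -q "${1:-mycontroller}"
    }

    if db_in_ram; then
        echo "Database is in RAM"
    else
        echo "WARNING: database is NOT in RAM" >&2
    fi
    ```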

    How to sync the database from RAM to disk
    To sync the database from RAM to the SD card, a line can be added to the crontab with the command:

    crontab -e

    add this line:

    30 * * * * /opt/mycontroller/bin/db2ram write

    This will sync the ramdisk to disk once an hour, at 30 minutes past the hour.
    Don't forget to make regular database backups (maybe even every hour), so that in case of a system crash, for whatever reason, only a minimal amount of data is lost...
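
    Since backups keep coming up in this thread, here is one way such an hourly backup could be sketched. The `backup_db` helper and the backups/ directory are my own assumptions, not part of MyController:

    ```shell
    #!/bin/sh
    # Hypothetical backup helper: archive the on-disk copy of the database to a
    # dated tarball and prune old archives.
    # Usage: backup_db <source-dir> <backup-dir> [archives-to-keep]
    backup_db() {
        src=$1; dest=$2; keep=${3:-24}
        mkdir -p "$dest"
        # tar the source directory into a timestamped archive
        tar -czf "$dest/mycontroller-$(date +%Y%m%d-%H%M%S).tar.gz" \
            -C "$(dirname "$src")" "$(basename "$src")"
        # keep only the $keep most recent archives
        ls -1t "$dest"/mycontroller-*.tar.gz 2>/dev/null | tail -n +"$((keep + 1))" | xargs -r rm --
    }

    # Example, using the paths from the setup above (run after a db2ram write,
    # so the archive is taken from a freshly synced on-disk copy):
    # backup_db /opt/mycontroller/conf-hdd.log /opt/mycontroller/backups 24
    ```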

  • @tag This is awesome information. Thank you for your hard work!!

  • MOD


    Thx!, 🙂
    Will polish up the files and attach them to this post.

  • Hi guys,

    I am back now and just trying to pick up where I left off.

    There was a similar issue on another system, and moving to btrfs, having the system boot from SD but run from a USB stick, works great (with the added advantage that 'live' full backups can be automated via cron). Maybe something to look into?

    What about using a Pi HDD? They are not so expensive and could help a lot with this too...

    Just a thought! 🙂

  • MOD


    Great tips!, and there are many solutions out there 🙂
    I'm currently running the database in memory and it works very well; however, there is always the topic of data loss, therefore I schedule regular backups (every 6 hours). It is a "proof of concept".

    Will definitely look into Btrfs; it seems to be "SSD aware", so this might be a solution. I need to figure out how my OPI-0 can boot from it.

    Thanks for the tips!! Thinking along is always appreciated 😉

  • @Tag.

    I just remembered that Xbian had similar SD card issues. They moved to btrfs and things suddenly became much better, and running live system backups with cron overnight really makes it for me. So simple to restore from the server if needed...

    This week I will look into booting pi3 from usb instead of SD card. Will be fun for sure! 😉

  • MOD


    Great! 👍 But keep in mind that a USB stick also has a limited number of writes per physical bit; maybe more than an SD card, but it is still not a hard disk/SSD.

  • Did this today, but now waiting for WD pidrive to come back into stock......

    FWIW I still get crashes 😞

    Will post about that in troubleshooting.....

  • Any news about this topic?
    Just a remark: DietPi has a built-in option to make the filesystem read-only.

  • Hi @tag!

    Is it possible to also move the InfluxDB database to RAM?

  • MOD


    Hi, I assume you mean putting the InfluxDB database in RAM?
    Yes, that should be possible, since this is just a ramdisk... but take precautions against power failures: power off = data gone...
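
    A minimal sketch of what that could look like, assuming the default InfluxDB data path on Debian packages, /var/lib/influxdb; the path and the 256M size are assumptions, and the same power-loss caveat applies, so sync and back up regularly:

    ```
    # /etc/fstab -- hypothetical entry: put InfluxDB's data directory on a ramdisk.
    # /var/lib/influxdb is the Debian default data path; 256m is a guess,
    # size it to your actual database.
    tmpfs  /var/lib/influxdb  tmpfs  nosuid,nodev,mode=0755,size=256m  0  0
    ```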
