How to make MyController SD-card friendly? (Highly experimental!)


  • MOD

    Hi!

    I am running MyController on an OrangePi Zero; this system's main storage is an SD card.
    Last week my SD card failed and I had to set everything up again (mysgw/MyController). I then wondered why the card failed, and one of the causes is the number of writes to the card by Linux and MyController.

    Since MyController performs a number of writes to the SD card, would it be possible to limit them somehow (using the standard setup with the standard database)?

    The benefit is that the SD card will live longer.
    RPi users will also greatly benefit from this.
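
    Before changing anything, it helps to measure how much is actually being written. A sketch (assuming the SD card appears as `mmcblk0`, the usual device name on ARM boards; adjust if yours differs) that samples the kernel's per-device I/O counters:

```shell
#!/bin/sh
# Sample /proc/diskstats to see how many sectors are written to a device.
# Field 3 is the device name; field 10 is total sectors written since boot.

sectors_written() {
    # $1 = device name, $2 = diskstats file (defaults to /proc/diskstats)
    awk -v dev="$1" '$3 == dev { print $10 }' "${2:-/proc/diskstats}"
}

# Usage (needs a real device, so commented out here):
# before=$(sectors_written mmcblk0)
# sleep 60
# after=$(sectors_written mmcblk0)
# echo "Written in the last minute: $(( (after - before) * 512 )) bytes"
```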

    HOWTO:

    !! Caution: RAM is volatile, so in case of a power outage all data will be lost! Make sure to schedule regular backups !!
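
    Since the ramdisk loses everything on power loss, a backup job is essential. A minimal sketch (the function name and both paths are examples, not part of MyController; point the source at the on-disk copy and the destination at storage that is not the ramdisk):

```shell
#!/bin/sh
# Create a timestamped tarball of the on-disk database copy.

backup_db() {
    # $1 = source directory (e.g. /opt/mycontroller/conf-hdd.log)
    # $2 = destination directory for the tarballs
    mkdir -p "$2"
    tar -czf "$2/mycontroller-$(date +%Y%m%d-%H%M%S).tar.gz" -C "$1" .
}

# Example cron entry, every 6 hours (backup_db.sh is a hypothetical wrapper):
# 0 */6 * * * /opt/mycontroller/bin/backup_db.sh
```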

    Add this to the bin/start.sh script, and set the correct directory where MyController is located (in this case /opt/mycontroller):

    # Set the variable below to 1 if you want the database on a ramdisk.
    # Useful when running MyController from an SD card, since putting the database
    # in RAM limits the number of writes to the card.
    # Since a ramdisk is volatile, make sure to set up a backup policy that runs every hour.
    # The script used is based on log2ram.
    
    DB2RAM=1
    
    # Provide the path to the main directory of the MyController software here.
    APP_LOCATION=/opt/mycontroller/
    
    if [ "$DB2RAM" = 1 ]; then
       echo "Database will be in ram..."
       $APP_LOCATION/bin/db2ram start
    fi
    
    

    Add this to the bin/stop.sh script:

      if /bin/mount | grep tmpfs | grep -q mycontroller; then
          echo "Database in ram, syncing to disk..."
          ./db2ram stop
      fi
    

    Create a new file called bin/db2ram and copy everything below into it.
    (Set the ramdisk size as required; in my case 30 MB is enough.)

    #!/bin/sh
    # db2ram: based on log2ram (https://github.com/azlux/log2ram), MIT license.
    # SIZE below defines how much RAM the conf folder will reserve.
    # If it is not enough, db2ram will not be able to use RAM; check the size of
    # your conf folder first. 30M is enough in my case; increase it if your
    # database is larger or grows over time.
    
    #APP_LOCATION=`/bin/pwd | sed 's/\/bin//'`
    APP_LOCATION=/opt/mycontroller
    
    SIZE=30M
    
    # Set this variable to true if you prefer "rsync" over "cp". The commands used
    # are "cp -u" and "rsync -X", so the whole folder is not copied every time.
    # Be sure rsync is installed if you use it.
    
    USE_RSYNC=false
    
    # If there is not enough space in RAM, a system mail can be sent.
    # With false you only get a log entry when RAM runs out; set it to true to enable mail.
    
    MAIL=false
    
    HDD_LOG=$APP_LOCATION/conf-hdd.log
    RAM_LOG=$APP_LOCATION/conf
    
    
    LOG2RAM_LOG="${HDD_LOG}/db2ram.log"
    LOG_OUTPUT="tee -a $LOG2RAM_LOG"
    
    isSafe () {
        [ -d $HDD_LOG/ ] || { echo "ERROR: $HDD_LOG/ doesn't exist!  Can't sync."; exit 1; }
    }
    
    syncToDisk () {
        isSafe
        if [ "$USE_RSYNC" = true ]; then
            rsync -aXWv --delete --exclude db2ram.log --links $RAM_LOG/ $HDD_LOG/ 2>&1 | $LOG_OUTPUT
        else
            cp -rfup $RAM_LOG/ -T $HDD_LOG/ 2>&1 | $LOG_OUTPUT
        fi
    }
    
    syncFromDisk () {
        isSafe
    
        if [ ! -z `du -sh -t $SIZE $HDD_LOG/ | cut -f1` ]; then
            echo "ERROR: RAM disk too small. Can't sync."
            umount -l $RAM_LOG/
            umount -l $HDD_LOG/
                    if [ "$MAIL" = true ]; then
                            echo "DB2RAM : No place on RAM anymore, fallback on the disk" | mail -s 'DB2Ram Error' root;
                    fi
            exit 1
        fi
    
        if [ "$USE_RSYNC" = true ]; then
            rsync -aXWv --delete --exclude db2ram.log --links $HDD_LOG/ $RAM_LOG/ 2>&1 | $LOG_OUTPUT
        else
            cp -rfup $HDD_LOG/ -T $RAM_LOG/ 2>&1 | $LOG_OUTPUT
        fi
    }
    
    wait_for () {
        while ! grep -qs $1 /proc/mounts; do
            sleep 0.1 
        done   
    }
    
    case "$1" in
      start)
          [ -d $HDD_LOG/ ] || mkdir $HDD_LOG/
          rm $LOG2RAM_LOG >/dev/null 2>&1
          mount --bind $RAM_LOG/ $HDD_LOG/
          mount --make-private $HDD_LOG/
          wait_for $HDD_LOG
          mount -t tmpfs -o nosuid,noexec,nodev,mode=0755,size=$SIZE mycontroller $RAM_LOG/
          wait_for $RAM_LOG
          syncFromDisk
          ;;
    
      stop)
          syncToDisk
          umount -l $RAM_LOG/
          umount -l $HDD_LOG/
          ;;
    
      write)
          syncToDisk
          ;;
              
      *)
          echo "Usage: db2ram {start|stop|write}" >&2
          exit 1
          ;;
    esac
    
    

    After creating the script, make it executable:

    chmod +x bin/db2ram
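
    To pick a sensible SIZE, measure how big the conf directory currently is and leave headroom for growth. A small sketch (doubling current usage is just my rule of thumb, not an official recommendation):

```shell
#!/bin/sh
# Report a directory's size and suggest a tmpfs size with headroom.

dir_size_mb() {
    # Size in whole megabytes; du -sm rounds up to at least 1.
    du -sm "$1" | cut -f1
}

suggest_size() {
    # Twice the current usage, in the "NNM" form the SIZE variable expects.
    echo "$(( $(dir_size_mb "$1") * 2 ))M"
}

# Usage:
# suggest_size /opt/mycontroller/conf    # e.g. prints "30M"
```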
    

    (Debian / Armbian only)
    To sync the database hourly from cron, create a file called db2ram in /etc/cron.hourly/:

    vi /etc/cron.hourly/db2ram
    

    add:

    #! /bin/sh
    
    /opt/mycontroller/bin/db2ram write > /dev/null
    

    Set the correct permissions:

    chmod +x /etc/cron.hourly/db2ram
    

    Reboot the system.

    Now verify that the database is located in RAM with the mount command:

    root@orangepizero:/etc/cron.hourly# mount | grep mycontroller
    /dev/mmcblk0p1 on /opt/mycontroller/conf-hdd.log type ext4 (rw,noatime,nodiratime,errors=remount-ro,commit=600)
    mycontroller on /opt/mycontroller/conf type tmpfs (rw,nosuid,nodev,noexec,relatime,size=30720k,mode=755)
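
    The same check can be scripted instead of eyeballed, e.g. for use in the start/stop scripts. A sketch that parses the mounts table:

```shell
#!/bin/sh
# Return success if the given mount point is tmpfs-backed.

is_tmpfs() {
    # $1 = mount point, $2 = mounts table (defaults to /proc/mounts)
    awk -v mp="$1" '$2 == mp && $3 == "tmpfs" { found = 1 } END { exit !found }' \
        "${2:-/proc/mounts}"
}

# Usage:
# if is_tmpfs /opt/mycontroller/conf; then echo "database is in RAM"; fi
```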
    

    How to sync the database from RAM to disk
    To sync the database from RAM to the SD card, a line can be added to the crontab with the command:

    crontab -e
    

    add this line:

    30 * * * * /opt/mycontroller/bin/db2ram write
    

    This will sync the ramdisk to disk every hour, at 30 minutes past the hour.
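
    A note on the schedule, since cron's field syntax is easy to misread: `30 * * * *` fires once per hour, at minute 30. If a sync every 30 minutes is wanted, cron's step syntax does that:

```shell
# Runs at minute 30 of every hour (i.e. hourly):
# 30 * * * * /opt/mycontroller/bin/db2ram write

# Runs every 30 minutes (at :00 and :30):
*/30 * * * * /opt/mycontroller/bin/db2ram write
```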

    Don't forget to make regular database backups (maybe even every hour), so that if the system crashes for whatever reason, a minimal amount of data is lost.

    TIP:
    Also install log2ram; this tool, which was the base for db2ram, will further limit writes to the SD card.


  • ADMIN

    @tag Sorry to hear this. I guess we may move the database to tmpfs or a USB disk.


  • MOD

    No worries, 🙂

    I guess I have just been unlucky with this SD card: my previous system ran MyController on another SD card for over a year (on an RPi), and my OrangePi Zero ran for over 10 months on this card (24/7).

    When I started investigating the writes to the SD card, I thought: the fewer writes, the better.

    Currently investigating a modification of the start and stop scripts to automagically move the DB to tmpfs.
    USB can be an option, but it needs to be a real hard drive, since USB sticks also have a finite number of write cycles.

    I think tmpfs is the easiest solution: it is very efficient, and the DB is not that big, so it should easily fit in memory (which makes it even faster ;))...
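
    Whether the DB really fits can be checked up front. A sketch that reads MemAvailable from /proc/meminfo (the threefold headroom in the usage note is just my assumption):

```shell
#!/bin/sh
# Report available memory in megabytes from a meminfo-style file.

mem_available_mb() {
    # MemAvailable is reported in kB; convert to whole MB.
    awk '/^MemAvailable:/ { print int($2 / 1024) }' "${1:-/proc/meminfo}"
}

# Usage:
# db_mb=$(du -sm /opt/mycontroller/conf | cut -f1)
# [ "$(mem_available_mb)" -gt $(( db_mb * 3 )) ] && echo "plenty of headroom"
```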

    Will keep you posted! 🙂


  • ADMIN

    @tag Great! Kindly post your solution. Will help others too 🙂


  • MOD

    Progress!

    Did a lot of research, and a great solution already exists called log2ram; I used it as the base for the db2ram script (full HOWTO in the first post above).



  • ADMIN

    @tag This is awesome information. Thank you for your hard work!!


  • MOD

    @jkandasa

    Thx!, 🙂
    Will polish up the files and attach them to this post.



  • Hi guys,

    I am back now, just trying to pick up where I left off.

    There was a similar issue on another system; moving to Btrfs and having the system boot from SD but load the system from a USB stick works great (with the added advantage that 'live' full backups can be automated via cron). Maybe something to look into?

    What about using a Pi HDD? They are not so expensive and could help a lot with this too.

    Just a thought! 🙂


  • MOD

    @skywatch
    Hi!,

    Great tips! There are many solutions out there 🙂
    Currently running the database in memory and it works very well; however, there is always the topic of data loss, so I schedule regular backups (every 6 hours). It is a "proof of concept".

    Will definitely look into Btrfs. It seems to be "SSD aware", so it might be a solution; I need to figure out how my OPI-0 can boot from it.

    Thanks for the tips!! Thinking along is always appreciated 😉



  • @Tag.

    I just remembered that Xbian had similar SD-card issues. They moved to Btrfs and things suddenly became much better, and running live system backups with cron overnight really makes it for me. So simple to restore from the server if needed...

    This week I will look into booting the Pi3 from USB instead of the SD card. Will be fun for sure! 😉


  • MOD

    @skywatch

    Great! 👍 But keep in mind that a USB stick also has a limited number of write cycles per physical cell; maybe more than an SD card, but it is still not a hard disk/SSD.



    Did this today, but now waiting for the WD PiDrive to come back into stock...

    FWIW I still get crashes 😞

    Will post about that in troubleshooting.....



    Any news on this topic?
    Just a remark: DietPi has a built-in option to make the filesystem read-only.



  • Hi @tag!

    Is it possible to also move the InfluxDB database to RAM?


  • MOD

    @wanvo

    Hi, I assume you mean putting the InfluxDB database in RAM?
    Yes, that should be possible, since this is just a ramdisk... but take precautions against power failures: power off = data gone.
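
    For reference, a sketch of how that could look as an /etc/fstab entry (untested; /var/lib/influxdb is the default data directory on Debian-style installs, and the 256M size is a placeholder to adjust to your retention):

```shell
# /etc/fstab entry (sketch): put the InfluxDB data directory on tmpfs.
# Same caveat as above: RAM is volatile, so schedule regular backups first.
tmpfs /var/lib/influxdb tmpfs nosuid,nodev,size=256M,mode=0755 0 0
```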


 
