
    HOWTO: make MyController SD-card friendly? (Highly experimental!!)

    Developers Zone
    15 Posts 5 Posters 3.9k Views
    • Tag MOD

      Hi!

      I am running MyController on an OrangePi Zero; this system's main storage is an SD card.
      Last week my SD card failed, and I had to set everything up again (mysgw/MyController). I then started wondering why the card failed; one of the causes is the number of writes to the card by Linux and MyController.

      Since MyController does a number of writes to the SD card, would it be possible to limit them somehow (using the standard setup with the standard database)?

      The benefit is that the SD card will live longer...
      RPi users will also greatly benefit from this...
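      To see how many writes are actually hitting the card, the kernel's per-device counters can be sampled before and after a change like this. A quick sketch; mmcblk0 is the usual SD-card device name on these boards (check lsblk for yours):

```shell
#!/bin/sh
# Print the kernel's "writes completed" counter for one block device.
# Field 8 of /proc/diskstats is writes completed since boot.
# $1 = device name, $2 = stats file (defaults to /proc/diskstats)
writes_completed() {
    awk -v dev="$1" '$3 == dev { print $8 }' "${2:-/proc/diskstats}"
}

# Sample the counter twice to estimate the write rate, e.g.:
#   before=$(writes_completed mmcblk0); sleep 60
#   after=$(writes_completed mmcblk0)
#   echo "$((after - before)) writes in the last minute"
```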

      HOWTO:

      !! Caution: since RAM is volatile, all data will be lost in case of a power outage! Make sure to schedule regular backups !!

      Add this to the bin/start.sh script, and set the correct directory where MyController is located, in this case /opt/mycontroller:

      # Set the variable below to 1 to keep the database on a ramdisk.
      # Useful when running MyController from an SD card, since putting
      # the database in RAM limits the number of writes to the card.
      # A ramdisk is volatile, so set up a backup policy that runs every hour.
      # The script used is based on log2ram.
      
      DB2RAM=1
      
      # Provide the path to the main directory of the MyController software here
      APP_LOCATION=/opt/mycontroller/
      
      if [ "$DB2RAM" = 1 ]; then
         echo "Database will be in ram..."
         $APP_LOCATION/bin/db2ram start
      fi
      
      

      Add this to the bin/stop.sh script:

        /bin/mount | grep tmpfs | grep mycontroller > /dev/null 2>&1
        if [ $? -eq 0 ]; then
         echo "Database in ram, syncing to disk..."
         ./db2ram stop
        fi
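      If the util-linux mountpoint tool is available on your image (it usually is on Armbian/Debian), the mount | grep test above can be replaced with a slightly more robust check. A sketch, assuming the same /opt/mycontroller location:

```shell
#!/bin/sh
# Alternative stop.sh check: mountpoint(1) succeeds only if the
# directory is itself a mount point (i.e. our tmpfs is mounted).
APP_LOCATION=/opt/mycontroller
if mountpoint -q "$APP_LOCATION/conf"; then
    echo "Database in ram, syncing to disk..."
    "$APP_LOCATION/bin/db2ram" stop
fi
```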
      

      Create a new file called bin/db2ram and copy everything below into it.
      (Set the size of the RAM disk as required; in my case 30 MB is enough.)

      #!/bin/sh
      # db2ram: keep the MyController database folder on a RAM disk.
      # Based on log2ram (https://github.com/azlux/log2ram), MIT license.
      # SIZE below defines how much RAM the tmpfs reserves for the conf
      # folder. If it is too small, db2ram cannot sync and falls back to
      # the disk; check the size of your conf folder and increase SIZE
      # if needed.
      
      #APP_LOCATION=`/bin/pwd | sed 's/\/bin//'`
      APP_LOCATION=/opt/mycontroller
      
      SIZE=30M
      
      # Set USE_RSYNC=true if you prefer "rsync" over "cp". The script uses
      # cp -u / rsync -X, so it does not copy the whole folder every time.
      # Make sure rsync is installed if you enable this.
      
      USE_RSYNC=false
      
      # If MAIL=true, a system mail is sent when the RAM disk runs out of
      # space; with MAIL=false you only get a log entry.
      
      MAIL=false
      
      HDD_LOG=$APP_LOCATION/conf-hdd.log
      RAM_LOG=$APP_LOCATION/conf
      
      
      LOG2RAM_LOG="${HDD_LOG}/db2ram.log"
      LOG_OUTPUT="tee -a $LOG2RAM_LOG"
      
      isSafe () {
          if [ ! -d "$HDD_LOG"/ ]; then
              echo "ERROR: $HDD_LOG/ doesn't exist! Can't sync."
              exit 1
          fi
      }
      
      syncToDisk () {
          isSafe
          if [ "$USE_RSYNC" = true ]; then
              rsync -aXWv --delete --exclude db2ram.log --links $RAM_LOG/ $HDD_LOG/ 2>&1 | $LOG_OUTPUT
          else
              cp -rfup $RAM_LOG/ -T $HDD_LOG/ 2>&1 | $LOG_OUTPUT
          fi
      }
      
      syncFromDisk () {
          isSafe
      
          if [ ! -z `du -sh -t $SIZE $HDD_LOG/ | cut -f1` ]; then
              echo "ERROR: RAM disk too small. Can't sync."
              umount -l $RAM_LOG/
              umount -l $HDD_LOG/
                      if [ "$MAIL" = true ]; then
                              echo "DB2RAM : No place on RAM anymore, fallback on the disk" | mail -s 'DB2Ram Error' root;
                      fi
              exit 1
          fi
      
          if [ "$USE_RSYNC" = true ]; then
              rsync -aXWv --delete --exclude db2ram.log --links $HDD_LOG/ $RAM_LOG/ 2>&1 | $LOG_OUTPUT
          else
              cp -rfup $HDD_LOG/ -T $RAM_LOG/ 2>&1 | $LOG_OUTPUT
          fi
      }
      
      wait_for () {
          while ! grep -qs $1 /proc/mounts; do
              sleep 0.1 
          done   
      }
      
      case "$1" in
        start)
            [ -d $HDD_LOG/ ] || mkdir $HDD_LOG/
            rm $LOG2RAM_LOG >/dev/null 2>&1
            mount --bind $RAM_LOG/ $HDD_LOG/
            mount --make-private $HDD_LOG/
            wait_for $HDD_LOG
            mount -t tmpfs -o nosuid,noexec,nodev,mode=0755,size=$SIZE mycontroller $RAM_LOG/
            wait_for $RAM_LOG
            syncFromDisk
            ;;
      
        stop)
            syncToDisk
            umount -l $RAM_LOG/
            umount -l $HDD_LOG/
            ;;
      
        write)
            syncToDisk
            ;;
                
        *)
            echo "Usage: db2ram {start|stop|write}" >&2
            exit 1
            ;;
      esac
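      The size guard in syncFromDisk above leans on a du feature that is easy to miss: with -t SIZE, du prints only entries of at least SIZE, so any output at all means the on-disk data would not fit in the tmpfs. A quick demonstration with throwaway paths:

```shell
#!/bin/sh
# Demonstrate du's --threshold (-t): output is empty when the
# directory is smaller than the threshold, non-empty otherwise.
mkdir -p /tmp/db2ram-demo
dd if=/dev/zero of=/tmp/db2ram-demo/blob bs=1024 count=64 2>/dev/null

small=$(du -s -t 1M /tmp/db2ram-demo | cut -f1)   # ~64K < 1M -> empty
big=$(du -s -t 4K /tmp/db2ram-demo | cut -f1)     # ~64K >= 4K -> size printed
echo "below threshold: '$small'  above: '$big'"
rm -rf /tmp/db2ram-demo
```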
      
      

      After creating the script, make it executable:

      chmod +x bin/db2ram
      

      (Debian / Armbian only)
      To sync the database hourly from cron, create a file called db2ram in /etc/cron.hourly/:

      vi /etc/cron.hourly/db2ram
      

      add:

      #! /bin/sh
      
      /opt/mycontroller/bin/db2ram write > /dev/null
      

      Make it executable:

      chmod +x /etc/cron.hourly/db2ram
      

      Reboot the system...

      Now verify that the database is located in RAM with the mount command:

      root@orangepizero:/etc/cron.hourly# mount | grep mycontroller
      /dev/mmcblk0p1 on /opt/mycontroller/conf-hdd.log type ext4 (rw,noatime,nodiratime,errors=remount-ro,commit=600)
      mycontroller on /opt/mycontroller/conf type tmpfs (rw,nosuid,nodev,noexec,relatime,size=30720k,mode=755)
      

      How to sync the database from RAM to disk
      Alternatively, to sync the database from RAM to the SD card, a line can be added to the crontab with the command:

      crontab -e
      

      add this line:

      30 * * * * /opt/mycontroller/bin/db2ram write
      

      This syncs the ramdisk at 30 minutes past every hour; for a sync every 30 minutes, use */30 * * * * instead.

      Don't forget to make regular database backups (maybe even every hour), so that in case of a system crash, for whatever reason, only a minimal amount of data is lost.
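      The backup itself can be as simple as archiving the on-disk copy (conf-hdd.log) after each sync. A minimal sketch; the paths, the database file name, and the 24-archive retention are all illustrative assumptions, and the demo uses throwaway directories so it can run anywhere:

```shell
#!/bin/sh
# Backup sketch: tar up the on-disk database folder and keep only the
# 24 newest archives. In real use APP_LOCATION would be
# /opt/mycontroller and BACKUP_DIR somewhere off the SD card.
APP_LOCATION=${APP_LOCATION:-/tmp/mycontroller-demo}
BACKUP_DIR=${BACKUP_DIR:-/tmp/mycontroller-backups}

# demo fixture: fake database folder (remove these lines for real use)
mkdir -p "$APP_LOCATION/conf-hdd.log"
echo demo > "$APP_LOCATION/conf-hdd.log/mcdb.h2.db"

mkdir -p "$BACKUP_DIR"
tar -czf "$BACKUP_DIR/db-$(date +%Y%m%d-%H%M%S).tar.gz" \
    -C "$APP_LOCATION" conf-hdd.log
# prune: keep only the 24 newest archives
ls -1t "$BACKUP_DIR"/db-*.tar.gz | tail -n +25 | xargs -r rm --
```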

      TIP:
      Also install log2ram; this tool, which db2ram is based on, will further limit writes to the SD card.

      • jkandasa @Tag

        @tag Sorry to hear this. I guess we could move the database to tmpfs or a USB disk.

        • Tag MOD

          No worries, 🙂

          I guess I have been unlucky with the SD card, because my previous system ran for over a year on another SD card with MyController (on an RPi). My OrangePi Zero ran for over 10 months on this card (24/7).

          When I started investigating the writes to the SD card, I thought: the fewer writes, the better.

          I am currently investigating a modification of the start and stop scripts to automagically move the DB to tmpfs.
          USB can be an option, but it needs to be a real hard drive, since USB sticks also have a finite number of write cycles.

          I think tmpfs is the easiest solution: it is very efficient, and the DB is not that big, so it should easily fit in memory (which makes it even faster ;))...

          Will keep you posted! 🙂

          • jkandasa @Tag

            @tag Great! Kindly post your solution. It will help others too 🙂

            • Tag MOD @jkandasa

              Progress!

              Did a lot of research, and a great solution exists called log2ram. I used this as the base for the db2ram script.

              (The full HOWTO is in the first post above.)

              • jkandasa @Tag

                @tag This is awesome information. Thank you for your hard work!!

                • Tag MOD @jkandasa

                  @jkandasa

                  Thx! 🙂
                  I will polish up the files and attach them to this post.

                  • skywatch

                    Hi guys,

                    I am back now and just trying to pick up where I left off.

                    There was a similar issue on another system; moving to btrfs, booting from SD but loading the system from a USB stick, works great (with the added advantage that 'live' full backups can be automated via cron). Maybe something to look into?

                    What about using a Pi HDD? They are not so expensive and could help a lot with this too......

                    Just a thought! 🙂

                    • Tag MOD @skywatch

                      @skywatch
                      Hi!

                      Great tips! And yes, there are many solutions out there 🙂
                      I am currently running the database in memory; it works very well. However, there is always the topic of data loss, so I schedule regular backups (every 6 hours). It is a "proof of concept".

                      Will definitely look into btrfs; it seems to be "SSD aware", so this might be a solution. I need to figure out how my OPI-0 can boot from it.

                      Thanks for the tips!! Thinking along is always appreciated 😉

                      • skywatch

                        @Tag

                        I just remembered that Xbian had similar SD card issues. They moved to btrfs and things suddenly became much better, and running live system backups with cron overnight really makes it for me. So simple to restore from the server if needed...

                        This week I will look into booting a Pi 3 from USB instead of the SD card. Will be fun for sure! 😉

                        • Tag MOD @skywatch

                          @skywatch

                          Great! 👍 But keep in mind that a USB stick also has a limited number of writes per physical bit; maybe more than an SD card, but it is still not a hard disk/SSD.

                          • skywatch

                            Did this today, but now waiting for the WD PiDrive to come back into stock......

                            FWIW I still get crashes 😞

                            Will post about that in troubleshooting.....

                            • blacksheepinc

                              Any news on this topic?
                              Just a remark: DietPi has a built-in option to make the filesystem read-only.

                              • wanvo @Tag

                                Hi @tag!

                                Is it possible to also move the InfluxDB database to RAM?

                                • Tag MOD @wanvo

                                  @wanvo

                                  Hi, I assume you mean putting the InfluxDB database in RAM?
                                  Yes, that should be possible, since this is just a RAM disk, but take precautions against power failures: power off = data gone.....
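                                  As a sketch of what that could look like: an fstab entry mounting InfluxDB's data directory on tmpfs. The path assumes the Debian default (/var/lib/influxdb) and the size is a guess; time-series data tends to outgrow RAM quickly, so the same sync/backup discipline as for db2ram applies.

```shell
# /etc/fstab entry (sketch): keep InfluxDB's data directory in RAM.
tmpfs  /var/lib/influxdb  tmpfs  nosuid,nodev,size=256M,mode=0755  0  0
```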

                                  Copyright © 2015-2025 MyController.org