Best way to add storage to VPS

I started using my VPS as a node on my Syncthing network.

I'm thinking of signing up for Backblaze B2 and mounting it using rclone. I have no experience with either one, but it's fun to learn.
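
In case it's useful, this is roughly what I have in mind from reading the rclone docs (the remote and bucket names are placeholders):

    # create a B2 remote, then mount a bucket (writes are cached locally before upload)
    rclone config create b2remote b2 account YOUR_KEY_ID key YOUR_APP_KEY
    rclone mount b2remote:my-bucket /mnt/b2 --vfs-cache-mode writes --daemon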

More than anything, I'm curious to hear how other people approach this problem. I've seen mentions of GDrive, but I'm trying to stay away from Google/Amazon and the like.

It's pronounced hacker.

Comments

  • Juicefs

    Thanked by (1)jqr
  • edited November 2023

    It all depends on the performance I need

    I regularly use Hetzner storage boxes and mount them via sshfs.
    I also use object storage a lot for my backups, but via the S3 API (Scaleway Object Storage).
    I've never tried mounting as a volume, but it seems feasible.

    https://github.com/s3fs-fuse/s3fs-fuse
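
    I've not tried it myself, but going by the s3fs docs it should look roughly like this (the bucket, keys, and Scaleway endpoint are examples):

    # store credentials, then mount the bucket over the S3 API
    echo ACCESS_KEY:SECRET_KEY > ~/.passwd-s3fs && chmod 600 ~/.passwd-s3fs
    s3fs my-bucket /mnt/s3 -o passwd_file=~/.passwd-s3fs -o url=https://s3.fr-par.scw.cloud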

    Thanked by (3)skorous jqr nick_
  • @remy said: It all depends on the performance I need

    This is a good comment.

    Sometimes I use NFS, sometimes iSCSI+bcache.
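
    The iSCSI+bcache combo is roughly this shape (the portal IP, target name, and devices are examples):

    # log in to the remote iSCSI disk, then put a local SSD partition in front as cache
    iscsiadm -m discovery -t sendtargets -p 203.0.113.10
    iscsiadm -m node -T iqn.2023-11.example:storage -p 203.0.113.10 --login
    make-bcache -B /dev/sdb -C /dev/nvme0n1p3
    mkfs.ext4 /dev/bcache0 && mount /dev/bcache0 /mnt/storage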

    Thanked by (1)jqr
  • I've had good experience with sshfs for mounting remote storage.

    Thanked by (1)jqr

    Recommended hosts:
    Letbox, Data ideas, Hetzner

  • @flo82 said:
    Juicefs

    I did not know about this one. I'll look into it. Any particular reason you like it?

    @remy said:
    It all depends on the performance I need

    I'm not worried about performance at all. This is basically a fallback for when I'm not at home, or can't use my VPN for some reason (like at school, where they block WireGuard. Why?!). The rest of the time I have direct access to my home server and can pull from that.

    Ironically, I had thought about SSHFS, but in my experience (a long time ago) it was unreliable, which is something I do care about.

    @tetech said:

    Sometimes I use NFS, sometimes iSCSI+bcache.

    I use NFS at home, but for remote access I would need to add the server to my VPN or something of the sort, no? I'm trying to avoid that because it would mean opening my home server to an untrusted machine.

    @james50a said:
    I've had good experience with sshfs for mounting remote storage.

    That's another vote. I'll definitely look into it again.

    Thank you all for your replies.

    It's pronounced hacker.

  • Interesting aside: I never realized how quickly stuff gets indexed by search engines nowadays. I was searching on DuckDuckGo (which I believe uses Bing) for "add remote storage to vps", and this thread is already one of the results. 🤯

    It's pronounced hacker.

  • @jqr said: I use NFS at home, but for remote access I would need to add the server to my VPN or something of the sort, no? I'm trying to avoid that because it would mean opening my home server to an untrusted machine.

    I've got two scenarios for this. For a home server, I set up a WireGuard connection from the home server to the VPS, so no ports are opened on the home server, and iptables locks everything down: the WireGuard port is only accessible from the home IP, and the NFS port is only open on the WireGuard interface. Everything else is dropped in both directions.
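
    Roughly, the VPS side looks like this (the home IP and ports are examples):

    # allow established traffic and WireGuard handshakes from the home IP only
    iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
    iptables -A INPUT -p udp --dport 51820 -s 198.51.100.7 -j ACCEPT
    # NFS only over the tunnel interface; everything else is dropped
    iptables -A INPUT -i wg0 -p tcp --dport 2049 -j ACCEPT
    iptables -P INPUT DROP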

    The other scenario is a "virtual cloud" (sharing among VPSes); in this case I use tinc.

    Thanked by (3)jqr FrankZ skorous
  • NFS mount, rclone mount, MinIO; if no mounting is needed: Syncthing send-only (host your own relay if the public ones are too slow).
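
    For the own-relay part, Syncthing's relay server is a single daemon; something like this should be enough (the port is the default, and an empty -pools keeps it out of the public pool):

    strelaysrv -listen ":22067" -pools=""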

    Thanked by (1)jqr

    Fuck this 24/7 internet spew of trivia and celebrity bullshit.

  • cybertech OG, Benchmark King

    I did sshfs too, but a conventional storage VPS (even better with RAID10) worked best for me.

    Thanked by (1)jqr

    I bench YABS 24/7/365 unless it's a leap year.

  • @Encoders said:
    if no mounting is needed: Syncthing send-only (host your own relay if the public ones are too slow)

    Ohhh... this might work as well. Use the VPS for the transfer, not the storage. The only downside is that I liked the idea of the VPS having an offsite copy of the files, but I really should focus on having proper backups instead. Thanks!

    @cybertech said:
    I did sshfs too, but a conventional storage VPS (even better with RAID10) worked best for me.

    Yeah, my BF plan is to get a new storage VPS. Thank you.

    It's pronounced hacker.

  • Best method: buy a VPS with large SSD.
    Second method: buy a hybrid storage VPS with SSD for software and HDD for storage, HostBrr and BuyVM have these.
    Worst method: all kinds of NFS or Rclone mounts; you never know when a glitch will cause data loss.

    Thanked by (4)jqr TheDP nick_ fluttershy
  • @yoursunny said:
    Second method: buy a hybrid storage VPS with SSD for software and HDD for storage, HostBrr and BuyVM have these.

    I've actually been eyeing HostBrr for a bit, just waiting for a good deal (1TB/$30 w/IPv4).

    Thanks for replying.

    It's pronounced hacker.

  • edited November 2023

    @jqr said:

    @remy said:
    It all depends on the performance I need

    I'm not worried about performance at all. This is basically a fallback for when I'm not at home, or can't use my VPN for some reason (like at school, where they block WireGuard. Why?!). The rest of the time I have direct access to my home server and can pull from that.

    Ironically, I had thought about SSHFS, but in my experience (a long time ago) it was unreliable, which is something I do care about.

    It's not :)
    You can get micro-cuts in the connection between your servers, and if you don't pass the right options when mounting the volume with sshfs, it can indeed give the impression of being unreliable, because it will never re-mount the volume after the slightest network problem.
    Take a look at the options:

    -o reconnect,ServerAliveInterval=10,ServerAliveCountMax=100
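
    Put together, a mount would look like this (the host and paths are examples):

    sshfs -o reconnect,ServerAliveInterval=10,ServerAliveCountMax=100 u12345@u12345.your-storagebox.de:/home /mnt/storagebox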

    There is of course an overhead compared to NFS, as traffic is encrypted.

    Thanked by (1)jqr
  • @jqr said:

    @flo82 said:
    Juicefs

    I did not know about this one. I'll look into it. Any particular reason you like it?

    Fast, reliable, and the most important feature: POSIX compliant.
    Try it for yourself. Rclone is fine, but I like juicefs more.

    Thanked by (1)jqr
  • @remy said:
    You can get micro-cuts in the connection between your servers, and if you don't pass the right options when mounting the volume with sshfs, it can indeed give the impression of being unreliable, because it will never re-mount the volume after the slightest network problem.

    Maybe that's what I ran into back then.

    Take a look at the options:

    -o reconnect,ServerAliveInterval=10,ServerAliveCountMax=100

    Ah, I recently had to deal with those to get an autossh to work reliably.

    There is of course an overhead compared to NFS, as traffic is encrypted.

    Makes sense. And I see that as a bonus. Thanks again for the info. I appreciate it.

    @flo82 said:
    Fast, reliable, and the most important feature: POSIX compliant.
    Try it for yourself. Rclone is fine, but I like juicefs more.

    Nice. Will do. Thank you!

    It's pronounced hacker.

  • @flo82 said:
    Juicefs

    Looks like an interesting solution; first time I've heard of it. Are you using it?

    Lead Platform Architect at the day job, Ethical Hacker/Bug Bounty Hunter on the side

  • bjo OG
    edited November 2023

    At the moment: Hetzner Storage Box via SSHFS in a CCX13. Thinking about switching to a php-friends box on BF; I would need some storage for Nextcloud. php-friends could give 500 GB of NFS/SMB storage; an alternative would be e2 via s3fs or goofys. The latter is faster, but unfortunately uses mtime as ctime and atime.

    Thanked by (1)jqr
  • @vitobotta said:

    @flo82 said:
    Juicefs

    Looks like an interesting solution; first time I've heard of it. Are you using it?

    Yes, using it together with syncthing. Works like a charm.

  • bjo OG
    edited November 2023

    Thanks for the hint on juicefs. I used gocryptfs on a storage box with sshfs before, which added a lot of overhead.
    @flo82 Any experience with which backend is best for metadata?

  • @bjo said:
    Thanks for the hint on juicefs. I used gocryptfs on a storage box with sshfs before, which added a lot of overhead.
    @flo82 Any experience with which backend is best for metadata?

    Juicefs saves file metadata in a separate DB, while the binary data is uploaded as chunks to a different backend. This is why juicefs is blazing fast. Furthermore, juicefs allows caching of chunks, which makes it as fast as your local drive. Encryption can be configured on the client side, or you can use server-side encryption if the backend supports it.

    E.g. you can use a (distributed) redis server for backing metadata and S3 with server-side encryption for binary chunks. This is what I'm doing. Please be aware: if you lose the metadata DB, you'll lose your data. So back up the metadata DB frequently.
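
    As a sketch of that setup (the redis URL, bucket, and filesystem name are examples):

    # format once: metadata in redis, chunks in S3 (encryption handled server-side by the bucket)
    juicefs format --storage s3 --bucket https://my-bucket.s3.eu-west-1.amazonaws.com --access-key KEY --secret-key SECRET redis://:password@203.0.113.5:6379/1 myjfs
    # mount with a local chunk cache (size in MiB)
    juicefs mount --cache-size 10240 redis://:password@203.0.113.5:6379/1 /mnt/jfs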

    Thanked by (2)bjo jqr
  • In my experience "rclone mount" works better than sshfs, even for SSH remotes. With local caching enabled I can watch 1080p video files from the mount point, with an 80 ms ping. If you have a low ping (<10 ms), CIFS or NFS will work best.
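
    For reference, the kind of mount I mean, using rclone's sftp backend (the remote name and cache sizes are examples):

    rclone mount sftpbox:media /mnt/media --vfs-cache-mode full --vfs-cache-max-size 20G --buffer-size 64M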

    Thanked by (2)FrankZ rogertheshrubb3r
  • @rm_ said:
    In my experience "rclone mount" works better than sshfs, even for SSH remotes. With local caching enabled I can watch 1080p video files from the mount point, with an 80 ms ping. If you have a low ping (<10 ms), CIFS or NFS will work best.

    For big files this works fine; you will run into problems if you have many small files. So it depends on the use case.

    BTW, here is a performance benchmark of juicefs: https://juicefs.com/docs/community/benchmark/
    I'm not affiliated with the company behind juicefs; I just like it very much :-)

    Thanked by (1)bjo
  • @flo82 said:
    Juicefs saves file metadata in a separate DB, while the binary data is uploaded as chunks to a different backend. This is why juicefs is blazing fast. Furthermore, juicefs allows caching of chunks, which makes it as fast as your local drive. Encryption can be configured on the client side, or you can use server-side encryption if the backend supports it.

    E.g. you can use a (distributed) redis server for backing metadata and S3 with server-side encryption for binary chunks. This is what I'm doing. Please be aware: if you lose the metadata DB, you'll lose your data. So back up the metadata DB frequently.

    Yeah, I read that and figured I should add redis to my backups; PostgreSQL seems too slow on my box. Fortunately, since 1.1.0 the client also backs up the metadata to the repo every hour.
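
    For an extra manual snapshot of the metadata, something like this should work (the redis URL is an example):

    juicefs dump redis://:password@203.0.113.5:6379/1 meta-backup.json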
