
host_c
About
- Username
- host_c
- Joined
- Visits
- 2,052
- Last Active
- Roles
- Member, Hosting Provider
- Thanked
- 3487
Comments
-
(Quote) (Image)
-
NFS, iSCSI, and iSCSI over Fibre Channel are storage transport protocols; they "share" out whatever RAID type you have on the server/storage. I included them in the comparison above just to give an idea of the performance/cost factors.
-
@HostMayo as a side note, since we dove into storage: here is a comparison between storage technologies and general use-case scenarios, with their general strengths and weaknesses: Feature | NFS | iSCSI | iSCSI over Fibre Channel …
-
@vish nice, you beat me to it.. (Image)
-
(Quote) I think that Palo Alto has made an appliance especially for him by now, so that is airtight protected =)
-
(Quote) Damn, I missed you, almost called 911; the problem is that my roaming is suspended and 112 did not know who @yoursunny is. =)
-
(Quote) since this is an LXC, you should at least have access to the VPS via VNC, right? In that case, set up a backup every hour and just restore last hour's backup. Or one backup of the LXC per day, keeping 7 days. As a router it will probably use ~2GB to…
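A minimal sketch of what that hourly schedule could look like on a Proxmox host, assuming a recent vzdump that supports --prune-backups and an illustrative container ID of 101:

    # /etc/cron.d/lxc-backup -- hourly snapshot-mode backup of CT 101,
    # keeping only the last 7 archives (adjust retention/storage to taste)
    0 * * * * root vzdump 101 --mode snapshot --storage local --prune-backups keep-last=7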
-
(Quote) I would personally do option A, as it gives me better control, plus I do not have to do routing and NAT in Debian, as I prefer a BSD to do that, whether you go with a GUI one or a CLI one. This will allow me to set up my internal "LAN"…
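Purely as an illustration of the BSD-does-the-NAT idea, a minimal pf setup on FreeBSD might look roughly like this (the vtnet0 interface name and the 10.0.0.0/24 LAN are placeholders):

    # write a minimal /etc/pf.conf that NATs the internal LAN out of the WAN interface
    cat > /etc/pf.conf <<'EOF'
    ext_if  = "vtnet0"          # WAN-facing interface (placeholder)
    lan_net = "10.0.0.0/24"     # internal LAN behind the BSD box
    nat on $ext_if inet from $lan_net to any -> ($ext_if)
    pass all                    # permissive for the sketch; tighten in real use
    EOF
    sysrc pf_enable=YES         # enable pf at boot
    pfctl -f /etc/pf.conf -e    # load the ruleset and enable pf now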
-
(Quote) (Image)
-
Never have I understood the need for such things. I get human behavior and fun and pranks and all that, but it seems I get "amazed" each day. EDIT: Confirmed, OGF is down; not even Cloudflare helped them. (Image)
-
(Quote) I do not wish to open a can of worms here. You are on to something, but I see it differently, so here it is (control of information): Guys (in general), you think that all those chat apps/video apps are not monitored/scanned/screened, sin…
-
(Quote) You should stop posting anything about your NET, each time you do, most of your services go under. It's no secret that you have a few people who really, really don’t like you. While having so-called "haters" might seem amusing, a…
-
@xvps for the moment we use self-hosted mailcow; sincerely, I have no clue what it does or how, but for a free product, nothing to complain about. We had a lot of other options before; some I remember: Zimbra (freaking awesome as a mail server) + filte…
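For anyone curious, the usual mailcow-dockerized bring-up is roughly the following (steps as I recall them from the upstream docs; double-check before running):

    # fetch mailcow-dockerized and start the stack
    cd /opt
    git clone https://github.com/mailcow/mailcow-dockerized
    cd mailcow-dockerized
    ./generate_config.sh        # asks for the mail hostname, writes mailcow.conf
    docker compose pull
    docker compose up -d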
-
THX @davide I was not up to date on MDADM.
-
(Quote) yes, that is true to some extent; this you will have to Google around. I do not know if mdadm maps the drives by UUID or by /dev/sdXXX, as on new motherboards drives can change naming (e.g. from sda to sdb or similar). In that case you have to …
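Small follow-up on that point: mdadm identifies array members by the UUID in their superblocks rather than by the sdX name, which you can check with the stock tools (paths below assume a Debian-style box):

    # arrays are listed by UUID, independent of sdX naming
    mdadm --detail --scan                  # e.g. ARRAY /dev/md0 metadata=1.2 UUID=...
    # which physical devices carry RAID superblocks right now
    blkid -t TYPE=linux_raid_member
    # persist the mapping so the array assembles by UUID at boot
    mdadm --detail --scan >> /etc/mdadm/mdadm.conf
    update-initramfs -u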
-
(Quote) (Image)
-
(Quote) My professional opinion: go with HW RAID as much as possible. But who am I; it is not like we mostly sell storage services =) This is why (RAID 60 of 24 drives on a node that has running customers, live migrations to it): (Spoiler) Th…
-
(Quote) For boot, I would use 1 NVMe, as the OS does not do heavy read/write to it, so wear level will not be an issue. Definitely RAID 10 on the 4 drives: ZFS if you have the RAM for it (88 GB), if not, mdadm. Use the additional NVMe for fast boot t…
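To make the RAID 10 option concrete, a ZFS "striped mirrors" pool over the four drives is a single command; the pool name and device names here are placeholders:

    # two mirrored vdevs striped together = ZFS's equivalent of RAID 10
    zpool create tank mirror /dev/sdb /dev/sdc mirror /dev/sdd /dev/sde
    zpool status tank              # verify both mirror vdevs are ONLINE
    zfs set compression=lz4 tank   # cheap win on most workloads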
-
(Quote) mdadm is as old as the Linux kernel; it is mature and stable, hence it is trusted. ZFS is like the new kid on the block, pretty young, so not that trustworthy for the moment. If you do a storage server for serving shares over NFS, CIFS, iSC…
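If the mdadm route were taken for such a share server, the rough shape would be something like the following (device names, mount point, and exported subnet are placeholders; assumes nfs-kernel-server is installed):

    # 4-drive RAID 10 with mdadm, then export it over NFS
    mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sd[b-e]
    mkfs.ext4 /dev/md0
    mkdir -p /srv/share && mount /dev/md0 /srv/share
    echo '/srv/share 10.0.0.0/24(rw,sync,no_subtree_check)' >> /etc/exports
    exportfs -ra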
-
(Quote) do that, send me the data when you have 30 customers with the add-on drive on the 100TB pool that basically shits itself IO-wise, barely hitting a few MB/s. We did this, and boy did it go to shit, and we tried it a bunch of times in…
-
(Quote) Because no one wants to BURN RAM for storage. Plus the fact that at high IO it kinda sucks (above 50-60K IO/sec). It is excellent for data integrity on large arrays if you hit it with a ton of RAM. Its purpose was * assure data integr…
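On the "burning RAM" point: ZFS's ARC grabs a large share of memory by default, but it can be capped with the zfs_arc_max module parameter; a rough sketch (the 16 GiB figure is just an example):

    # cap the ZFS ARC at 16 GiB (16 * 1024^3 bytes) so the rest of the RAM stays free
    echo 'options zfs zfs_arc_max=17179869184' > /etc/modprobe.d/zfs.conf
    update-initramfs -u          # Debian/Ubuntu; reboot or reload the module to apply
    # check current ARC size (size) and cap (c_max) at runtime
    awk '$1 == "size" || $1 == "c_max" {print $1, $3}' /proc/spl/kstat/zfs/arcstats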
-
Whatever you do, OK, just don't do RAID 5; it has not been recommended in PROD for almost 3 decades now (since larger drives showed up). For boot, as I said before, do a mirror on what you have (the 2 drives). NVMe, if consumer, via PCI-EX, aaaaaa,…
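A two-drive boot mirror with mdadm is a one-liner; sketched here with placeholder NVMe partition names, since the actual partition layout will differ:

    # RAID 1 across the two boot drives' partitions (names are placeholders)
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/nvme0n1p2 /dev/nvme1n1p2
    cat /proc/mdstat             # watch the initial resync progress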
-
@davide ZFS vs the ext3 you had: I will just say this, you will not lose a bit of data, ever, with ZFS. Good choice for SW RAID.
-
Guys, 2-drive mirrors will suck, especially on SW RAID. Data has to be multiplied and copied to each drive + write ACK; it will suck the same on HW RAID too, if that makes you feel all fuzzy inside =). Performance with RAID comes with striped mirro…
-
(Quote) I am getting something 1U in Physical Form, so others can enjoy it too. :p