Been using MetalVPS as a build machine for my multiple VPSs running NixOS. Compiling software from source on this thing is fast!
It usually takes my own PC 20 minutes to finish compiling ZeroTier (NixOS doesn't keep its binary in their repo for licensing reasons), but only a couple of minutes on MetalVPS!
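For anyone else building ZeroTier this way: a minimal sketch of the usual nixpkgs switch for unfree packages (the file path is the standard per-user location; this is not taken from the thread):

```nix
# ~/.config/nixpkgs/config.nix -- allow evaluation of unfree packages such as zerotierone
{ allowUnfree = true; }
```

With that in place, something like `nix-env -iA nixpkgs.zerotierone` should build it from source when no cached binary is available.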
@drunekndog said:
Why not LXD? It's a lot more usable than plain LXC.
Why not LXC?
It's a lot more efficient than fancy LXD.
I don't buy this; it seems a lot like complaints about IPv6 using more resources than IPv4: the overhead may exist in theory, but it's unlikely to make a big impact in practice.
An important technical limitation of LXD is the lack of isolation between host machine users authorized to create containers.
Access control for LXD is based on group membership.
The root user and all members of the lxd group can interact with the local daemon.
Anyone with access to the LXD socket can fully control LXD, which includes the ability to attach host devices and file systems.
In contrast, LXC doesn't have a local daemon.
Each user can launch their own unprivileged containers, without being able to access other users' containers.
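As a sketch of what that per-user setup involves (the uid/gid ranges here are illustrative placeholders, not taken from this server):

```text
# ~/.config/lxc/default.conf for one unprivileged user
lxc.idmap = u 0 1000000000 65536   # container uid 0..65535 maps into this user's /etc/subuid range
lxc.idmap = g 0 1000000000 65536   # likewise for gids, via /etc/subgid
lxc.net.0.type = veth
lxc.net.0.link = lxcbr0            # shared bridge; per-user usage is limited by /etc/lxc/lxc-usernet
```

Each user's containers then run under a disjoint uid range, so they can't touch each other's files or containers.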
@Not_Oles said:
Is it too crazy just to give each container its own static WAN IPv4 and IPv6/64 ?
I'd rather use NAT'ed IPv4, so that more containers can be accommodated.
auto lxcbr0
iface lxcbr0 inet static
    bridge-ports eth1
Bridging containers directly on the physical port may result in the containers' MAC addresses becoming visible on the physical network.
I don't know about Cloudie, but doing this in KVM would get filtered by Virtualizor, and doing this at Hetzner would trigger an infraction warning letter.
Since you are new here at LES we can say that your being new helps us celebrate the new in the New Year.
Do you want to share a little about who and where you are and what you want to do on the server?
Thanks!
Tom
Thank you, and I wish you a happy new year! I am a student from China. I already have two VPSes, but I haven't used Alpine yet, so I want to try it. I will run a web service on it, or a Telegram bot, or something.
I am a university student. My major is actually not computer-related, but I am very interested in computers. We learned C; now I want to learn Python during the holidays.
"A single swap file or partition may be up to 128 MB in size. [...] [I]f you need 256 MB of swap, you can create two 128-MB swap partitions." (M. Welsh & L. Kaufman, Running Linux, 2e, 1996, p. 49)
Thanks for your helpful, relevant, and interesting comments!
@yoursunny said: I'd rather use NAT'ed IPv4, so that more containers can be accommodated.
May I please ask, how many containers do you think we should accommodate?
@yoursunny said: Bridging containers directly on the physical port may result in the containers' MAC addresses becoming visible on the physical network.
I don't know about Cloudie, but doing this in KVM would get filtered by Virtualizor, and doing this at Hetzner would trigger an infraction warning letter.
What we previously did at Hetzner was assign each VM an IPv4/32. If I understand correctly, having the VMs use the link layer prevented the issue of the VM MAC addresses becoming visible. A difference between the former Hetzner setup and what is happening now at Cloudie is that, at Hetzner, the extra IPs were out of band relative to the server's main IPv4. The /etc/network/interfaces file formerly in use at Hetzner looked something like the one shown below.
May I please ask, assuming for the purpose of the question that we stick to assigning VMs individual IPv4s and do not do NAT, would assigning IPv4/32s to the VMs mitigate the MAC address leak without having the extra IPs out of band with respect to the main IP? If no, could adding an additional main IPv4 and gateway from Cloudie plus using link layer solve the MAC address leak problem as it apparently did at Hetzner?
May I please ask what do others think about whether we should do NAT or assign individual IPs? And, how many containers do others think we should have?
root@fsn1 ~ # cat /etc/network/interfaces
### Hetzner Online GmbH installimage
source /etc/network/interfaces.d/*

auto lo
iface lo inet loopback
iface lo inet6 loopback

auto enp7s0
iface enp7s0 inet static
    address 157.90.35.101
    netmask 255.255.255.255
    gateway 157.90.35.65
    pointopoint 157.90.35.65
    post-up echo 1 > /proc/sys/net/ipv4/ip_forward
    post-up echo 0 > /proc/sys/net/ipv4/conf/enp7s0/send_redirects

iface enp7s0 inet6 static
    address 2a01:4f8:251:595a::2
    netmask 128
    gateway fe80::1
    post-up echo 1 > /proc/sys/net/ipv6/conf/all/forwarding

auto vmbr0
iface vmbr0 inet static
    address 157.90.35.101
    netmask 255.255.255.255
    bridge_ports none
    bridge_stp off
    bridge_fd 0
    bridge_maxwait 0
    pre-up brctl addbr vmbr0
    post-up ip route add 148.251.166.96/32 dev vmbr0
    pre-down ip route del 148.251.166.96/32 dev vmbr0
    post-up ip route add 148.251.166.97/32 dev vmbr0
    pre-down ip route del 148.251.166.97/32 dev vmbr0
    [ . . . ]
    post-up ip route add 148.251.166.111/32 dev vmbr0
    pre-down ip route del 148.251.166.111/32 dev vmbr0

iface vmbr0 inet6 static
    address 2a01:4f8:251:595a::2
    netmask 64
root@fsn1 ~ #
@yoursunny said: I'd rather use NAT'ed IPv4, so that more containers can be accommodated.
May I please ask, how many containers do you think we should accommodate?
Typically, I create one LXC container for each app or use case.
In my closet server, there are separate containers for C++ development and Go development, plus one for push-up video encoding (ffmpeg with iGPU access).
MetalVPS-fmt has dozens of accounts.
You would eventually run out of dedicated IPv4 if everyone wants an IPv4.
Thus, NAT is unavoidable.
would assigning IPv4/32s to the VMs mitigate the MAC address leak
In my closet server, I just let the MAC addresses leak, and the home router gives each container its own IPs.
Datacenter switches might not like this.
In my KVM servers, the bridge for LXC containers has NAT'ed IPv4 and routed IPv6.
It's defined in Netplan.io like this:
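The YAML itself didn't make it into this copy of the post. Purely as an illustrative sketch (addresses invented, not yoursunny's actual config), a NAT'ed-IPv4 + routed-IPv6 bridge in Netplan can look like:

```yaml
# /etc/netplan/60-lxcbr0.yaml -- illustrative only
network:
  version: 2
  bridges:
    lxcbr0:
      addresses:
        - 10.0.3.1/24        # private IPv4; outbound NAT (e.g. iptables MASQUERADE) is set up separately
        - 2001:db8:53::1/64  # routed IPv6 (documentation prefix as a placeholder)
      parameters:
        stp: false
        forward-delay: 0
```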
I see from your profile that you are new here. Welcome to LES! It is great to meet you!
Want to introduce yourself? I'm sure everyone would be interested to know your name, where you are from, and something about your Linux experience. Also, what do you want to do on the server?
I don't know whether this is at all correct, but here, quoting from above, I fixed a couple of little mistakes. The changed lines are followed by comments.
@Not_Oles said: Next up: lxc-usernet file, maybe tomorrow, it's getting late here:
fmt:~# man lxc-usernet # Works!
fmt:~# cat /etc/lxc/lxc-usernet
cat: can't open '/etc/lxc/lxc-usernet': No such file or directory
fmt:~#
Okay, it's tomorrow already!
From man lxc-usernet(5):
This file consists of multiple entries, one per line, of the form:
user type bridge number
So here's lxc-usernet:
fmt:~# ls /etc/lxc
default.conf
fmt:~# touch /etc/lxc/lxc-usernet # The busybox version of ed editor can edit existing files but seemingly can't create files!
fmt:~# cat /etc/subuid
root:100000:65536
notoles:1000000000:65536
localhost:1002000000:65536
Fritz:1005000000:65536
yoursunny:1018000000:65536
subenhon:1022000000:65536
fmt:~# ed /etc/lxc/lxc-usernet # ed editor allows me to see what was printed in the terminal just before I started editing.
0
a
notoles veth lxcbr0 1 # I am guessing that root doesn't need an entry in the lxc-usernet file, but I am not sure.
localhost veth lxcbr0 1
Fritz veth lxcbr0 1
yoursunny veth lxcbr0 1
subenhon veth lxcbr0 1
.
w
113
q
fmt:~#
LXC still isn't expected to work, partly because lxcbr0 isn't present: /etc/network/interfaces hasn't been updated yet. There is no bridge in the current setup. I sent the possible new interfaces configuration to Cloudie, so we will see what he says.
Hi @yoursunny! Thanks for letting me know your preference! Would it be okay with you to leave the lxc-usernet number at 1 until we hopefully find that everyone can make a container that works? Then we could increase the number. However, if you or anyone else needs a larger number from the beginning, please just let me know, and I will increase the number.
I see from your profile that you are new here. Welcome to LES! It is great to meet you!
Want to introduce yourself? I'm sure everyone would be interested to know your name, where you are from, and something about your Linux experience. Also, what do you want to do on the server?
I'm looking forward to hearing more from you!
Best wishes and kindest regards,
Tom
Hi
Actually I'm new to this forum kind of stuffs , More like this is first forum I've joined .
And I'm Actually from India,
I've been Using a Debian 11 for nearly a year , I'm new to linux community though
I've been looking for a free server to host some of python projects , most likely telegram and Discord Bots That I've made ,
Also I've never used any other distro than debian, i would love to explore a new distro :-)
Thanks for your intro! Again, it's nice to have you with us! I hope you enjoy the server!
I made your account already. Your password is in a file in your home directory. Please change your password when you have a chance. Since password login has been disabled, I need your ed25519 ssh public key. If you kindly post your key, I will drop it into your account. Then you hopefully will be able to log in via IPv4 or IPv6 using your key.
Wonderful to hear from you! I see that you have been here on LES for awhile, and that your profile might have less activity than some others. Would you like to introduce yourself by telling us who and where you are plus something about your Linux experience? Also, what do you want to do on the server?
I'm looking forward to reading some interesting solutions which might be posted by some of the other guys.
MetalVPS doesn't make any backups. Quoting from the OP:
Please make your own redundant, offsite backups! It's easy to download or sync or clone your backup to a safe place. Please also make sure that you actually can restore from your backups! Please think of your MetalVPS account as ephemeral! It might blow up! We or you might reinstall the node! 🤦♂️
For myself, I'm like you. I make tar archives and download them with scp. Once in a while I use rsync. One of the users over at Darkstar used Borgbackup. Borgbackup seemed to work pretty well! There's also rclone, which I haven't tried yet.
A couple of users have, cleverly, I think, "eliminated" the backup process by "reversing" the common use pattern of the server. These guys hardly leave anything on the server, maybe a couple of almost empty directories. When they want to use the server, they bring in their files with something like ansible or rsync.
It's important to make sure backups actually can be restored. I make sha256sums to check the downloads, and then I untar the archives and check the files. Often I make sha256sums of individual files if they are important.
Hope this helps! Best wishes!
P.S. @terrorgen I'd be interested to hear more about any new backup procedures that you try. @Everyone, what are you using for backups?
@terrorgen said:
Been using MetalVPS as a build machine for my multiple VPSs running NixOS. Compiling software from source on this thing is fast!
It usually takes my own PC 20 minutes to finish compiling ZeroTier (NixOS doesn't keep its binary in their repo for licensing reasons), but only a couple of minutes on MetalVPS!
It's so great that you are doing this! I'd be interested to hear more details about whatever you are installing on the server with Nix. Congrats again on getting Nix working!
@Not_Oles said: LXC still is not expected to work yet partly because lxcbr0 isn't present because /etc/network/interfaces hasn't been updated yet. There is no bridge in the current setup. I sent the possible new interfaces configuration to Cloudie, so we will see what he says.
Haven't yet heard back from @Cloudie. He is a great guy! Sometimes he gets busy, and that's okay.
Later today or tomorrow I might mess around with the networking a little to see whether I can get LXC going. There might be some downtime. Please remember that you can check the IPv4 and IPv6 monitors.
If you are running anything that needs stability, please let me know and I will hold off awhile. Thanks!
@terrorgen said:
Connection to 1.1.1.1 and 8.8.8.8 seems to not work for some reasons
fmt:~$ mtr 1.1.1.1
-ash: mtr: not found
fmt:~$ traceroute 1.1.1.1
traceroute to 1.1.1.1 (1.1.1.1), 30 hops max, 46 byte packets
1 10.100.4.1 (10.100.4.1) 0.228 ms 0.250 ms 0.198 ms
2 149.112.26.23 (149.112.26.23) 0.480 ms 0.293 ms 0.168 ms
3 45.45.210.98 (45.45.210.98) 0.346 ms 0.370 ms 0.413 ms
4 * * *
5 * * *
6^C
149.112.26.23 is Lambda-IX.
45.45.210.98 is OHANACRAFT LLC game hosting.
Route leak happening?
Comments
The all seeing eye sees everything...
Okay, here are the /etc/sub*id files.
👨💻
I hope everyone gets the servers they want!
Ooh, completely missed this. Thanks for the info!
Today's updates: add bridge-utils-doc
New /etc/network/interfaces ?
Currently installed:
Does the following look right for a first pass at the new /etc/network/interfaces?
Next up: lxc-usernet file, maybe tomorrow, it's getting late here.
Accepting submissions for IPv6 less than /64 Hall of Incompetence.
All working, thank you for everything you do!
Hey,
What are the requirements to get this? :-P
Hey,
Congrats on your first post!
"A single swap file or partition may be up to 128 MB in size. [...] [I]f you need 256 MB of swap, you can create two 128-MB swap partitions." (M. Welsh & L. Kaufman, Running Linux, 2e, 1996, p. 49)
Just tell @Not_Oles how great he is so he can add the quote to his wall of self-gratification.
There's also a systemd service that sets static binding of container IP ranges (each container gets IPv4 /24 and IPv6 /116) and MAC addresses.
I don't know what to do in interfaces file, as it's no longer preferred for Ubuntu.
Need larger numbers, e.g. 16.
root account shouldn't be used except for modifying system settings.
Tshark and friends
Thanks ;-)
Sounds great, I would like to give it a try. Thanks for all!
Question: how do you back up data on MetalVPS?
My idea is to set up a cron job that would scp my home folder periodically.
Any other ideas?
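For the cron idea, a crontab entry along these lines could work (the `fmt` host alias and paths are placeholders; note that `%` must be escaped in crontab):

```text
# crontab -e: pull a copy of the remote home directory nightly at 03:30
30 3 * * * mkdir -p "$HOME/backups" && scp -qr fmt:/home/youruser "$HOME/backups/metalvps-$(date +\%F)"
```

rsync, or an append-only tool like Borg, avoids re-copying everything each night.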
Hi @raveen2k3!
Please keep us updated as your projects progress!
Best wishes!
Tom
Hi @qmesso!
I'm looking forward to setting up your account!
Best wishes!
Tom
Hey,
Thanks for the reply ☺️
And here is my public key
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICTSdu+z+191g3COcxlUDMS7jcJgSqtsg178O4mAnWoz