LXC or OpenVZ with veth network interface would be acceptable.
@hostEONS said:
Please note you are allowed a maximum of 20 IPv6 addresses per VPS, as too many IPs can cause ARP cache issues, hence this limit is set per VPS.
This is exactly why you should provide a veth network interface and a routed IPv6 subnet.
There would be two entries in the router/switch, regardless of how many IPv6 addresses are in use.
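To illustrate with documentation prefixes (the 2001:db8::/32 addresses below are examples, not real assignments): with a routed subnet, the upstream device keeps one neighbor entry for the VPS gateway address and one route for the delegated subnet, no matter how many addresses the VPS configures inside it. On a Linux router, that could be as simple as:

```
# Illustrative only: one static route covers the whole delegated /64,
# so no per-address NDP entries accumulate on the router.
ip -6 route add 2001:db8:f00d::/64 via 2001:db8::100
```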
We cannot use veth because, as far as I know, the Virtualizor control panel has no support for it; we would probably need to set up such VPSes manually.
If we use routed IPs, then should the VPS node IP get nulled for any reason, all the VPS IPs would go down with it. We also have a common IP pool for both KVM and OVZ VPS nodes, so if we set the pool used for OVZ to routed, it would become routed for the KVM nodes as well, which we don't want.
Moreover, these are low-end VPSes and we hardly get such special requests; for 99% of users, a single IPv6 address is enough. When we didn't impose these limits, we saw users add hundreds of IPv6 addresses and then complain that their VPS was slow, because the network stack consumed most of the resources and also caused ARP issues.
Sorry, but we are trying to offer LES deals while keeping them sustainable; we cannot change our whole setup for this.
I've not tried routed IPs recently, but from what I remember they caused some issues, and right now we have no intention of changing our existing setup's configuration.
It's time to ditch Virtualizor.
VirtFusion panel of the year.
Can't really change panels every now and then ...
I agree it is a great panel from a user's perspective. Can't say if it is the same from the provider's perspective or not. Personally, I prefer it when a potential KVM VM I am looking to purchase has a /64 of IPv6. OpenVZ 7 has limitations, and one of them seems to be the way it handles IPv6 by default. Yes, you can use veth network interfaces, but that adds an additional layer of complexity to an already very low-end product, IMO.
Does VF automate routed IPv6 in some way?
I could not find a way to automate a routed IPv6 setup using any panel.
Routed IPv6 is available in Virtualizor.
Just a comment about VF, from a customer's experience and POV, I personally like it.
Things I would like it to have are:
Caught it this time!
"Recomondations"
Fixed. Thank you for your support.
You might think that I plant these little Easter eggs for you to find on purpose, but I d ....
@FrankZ said: You might think that I plant these little Easter eggs for you to find on purpose, but I d ....
LOL!
I don't press that big red Assign Address button.
The entire subnet is directly usable in the VPS.
Just add millions of addresses in Netplan.
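For example, with a routed /64 (documentation prefix; the interface name is an assumption), extra addresses are just more list entries:

```
# /etc/netplan/60-extra-ipv6.yaml -- illustrative sketch only
network:
  version: 2
  ethernets:
    eth0:                          # interface name assumed
      addresses:
        - 2001:db8:f00d::1/64
        - 2001:db8:f00d::2/64
        - 2001:db8:f00d::beef/64   # add as many as needed
```

Then `netplan apply` activates them, since the whole prefix is already routed to the VPS.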
@Abdullah said: Does VF automate routed IPv6 in some way?
I think so.
Both @skhron and @crunchbits delivered routed IPv6 in VirtFusion.
@yoursunny said: The entire subnet is directly usable in the VPS.
Indeed, but the problem is if and when you need to set rDNS for those IPs.
I'm sure the provider can help via a ticket, but it would make sense if they just allowed users to add and remove custom IPs via the panel - similar to what was possible on SVM.
Thank you. That is a good thing to know.
I never bothered with rDNS in VPS.
I only used rDNS once at a hackathon, with tunneled subnet.
https://yoursunny.com/t/2016/HackArizona/
For maximum flexibility, the panel should delegate the reverse zone to a nameserver chosen by the user.
User can then have as many rDNS records as they want.
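As a sketch of what that could look like on the provider side, in BIND zone-file syntax (documentation prefix 2001:db8:f00d::/64; the nameserver names are placeholders):

```
; In the provider's 8.b.d.0.1.0.0.2.ip6.arpa zone (2001:db8::/32):
; hand the reverse zone for 2001:db8:f00d::/64 to the customer.
0.0.0.0.d.0.0.f  IN  NS  ns1.customer.example.
0.0.0.0.d.0.0.f  IN  NS  ns2.customer.example.
```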
@yoursunny said: For maximum flexibility, the panel should delegate the reverse zone to a nameserver chosen by the user.
Yes, that would be a lot better.
Some providers can and do support this, but I believe only for a /48 block or bigger.
One less money-making opportunity, no?
TunnelBroker.net can delegate /64 prefix.
Thus, there's no technical reason to require /48 subnet for rDNS delegation.
Mentally strong provider do not charge money.
microLXC top provider.
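With such a delegation in place, the customer answers PTR queries from their own zone; a minimal sketch, again with the documentation prefix and placeholder names:

```
; Zone 0.0.0.0.d.0.0.f.8.b.d.0.1.0.0.2.ip6.arpa
; (reverse zone for 2001:db8:f00d::/64).
$TTL 3600
@  IN  SOA  ns1.customer.example. hostmaster.customer.example. (
       2024010101 7200 3600 1209600 3600 )
   IN  NS   ns1.customer.example.
; PTR record for 2001:db8:f00d::1
1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0  IN  PTR  mail.customer.example.
```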
That time I fell through a hole in the datacenter floor and was reincarnated as a /64 of IPv6.
I was referring to providers offering native IPv6: they would normally support delegating NS authority for the subnet if it was at least a /48.
Just based on past experience.
@yoursunny said:
For maximum flexibility, the panel should delegate the reverse zone to a nameserver chosen by the user.
User can then have as many rDNS records as they want.
This is a cool idea! However, there may be interference with how VF manages rDNS... Need to investigate further.
@TheDP said: I was referring to providers offering native IPv6: they would normally support delegating NS authority for the subnet if it was at least a /48.
You can create NS record(s) for any prefix, like a /64.
Of course you can.
This is the best way to solve the rDNS issue; let's make a new list of providers who support this feature.
VirtFusion providers, it is time for your upvotes
https://virtfusion.featureos.app/p/rdns-delegation-for-ipv6
Nice! We also use VF for KVM services. Would any of you be kind enough to share how to achieve this?
I think this is done on a per-hypervisor basis with a routed subnet for each HV, but that would make it impossible to migrate between nodes. It also won't work for a VLAN setup.
You only need to create a server-after-boot hook on each of your hypervisors to set up routing.
It should not, although I haven't tested it yet.
I don't see any reason why VLANs would interfere (our hypervisor uses VLANs and has a very custom network setup).
As the hypervisor can run a hook after booting a virtual machine, the hook script can either:
SSH into the hardware router(s) and insert/replace a static route for the VM prefix.
Make an intra-domain routing announcement for the VM prefix from the routing daemon on the hypervisor node. This routing announcement would be received by hardware routers and other hypervisor nodes.
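A minimal sketch of such a hook under the second approach, assuming the panel passes the VM's tap interface and routed prefix as arguments (the argument layout is hypothetical, not VirtFusion's documented interface):

```python
#!/usr/bin/env python3
# Hypothetical after-boot hook: argv[1] = VM tap interface (e.g. "tap123"),
# argv[2] = routed prefix (e.g. "2001:db8:f00d::/64").
# Installing the route into the kernel table is enough when a routing
# daemon on the node redistributes kernel routes to the rest of the network.
import subprocess
import sys

tap_iface, prefix = sys.argv[1], sys.argv[2]
subprocess.run(["ip", "-6", "route", "replace", prefix, "dev", tap_iface],
               check=True)
```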
@yoursunny said:
As the hypervisor can run a hook after booting a virtual machine, the hook script can either:
SSH into the hardware router(s) and insert/replace a static route for the VM prefix.
Make an intra-domain routing announcement for the VM prefix from the routing daemon on the hypervisor node. This routing announcement would be received by hardware routers and other hypervisor nodes.
In our case the routes are added to the Linux routing table and then propagated by iBGP; no need for quirks with SSH.
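A sketch of what the daemon side could look like with bird2 (AS number, peer address, and filter are illustrative, not skhron's actual configuration):

```
# bird2 sketch: learn the kernel routes installed by the hook and
# announce them to an iBGP peer. Names and addresses are examples.
protocol kernel {
    learn;                                # pick up routes added via "ip -6 route"
    ipv6 { import all; export none; };
}
protocol bgp ibgp_core {
    local as 65000;
    neighbor 2001:db8::ffff as 65000;     # iBGP peer (same AS)
    ipv6 {
        import all;
        export where source = RTS_INHERIT;  # only kernel-learned routes
    };
}
```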