@VirMach This is not your sales thread it is just a placeholder. I was just getting ready to change the title.
Please open a blank new thread for the 2023 sales thread if you wish to continue the offer.
Comments
so, WebNX?
or Colocrossing
I bench YABS 24/7/365 unless it's a leap year.
My VPS in LAX will go offline after 520 days of continuous operation.
Does the migrated server support ipv6?
Have the honor of being the crybaby who pays $20 for a 128MB VPS at VirMach in 2023.
Drat! Was thinking of grabbing a 2xIP in LAX, thinking it might get moved to San Jose.
/jk
It wisnae me! A big boy done it and ran away.
NVMe2G for life! until death (the end is nigh)
Take care your ass, bro.
A few of them hit 969 days today. Sadly no four digit days, so close.
I've provided an update on the network status page, but basically the facility we're moving to said it'd be better if we proceeded tomorrow, so I've been focusing on making sure all servers are in tip-top shape to avoid unexpected delays once they're re-racked. I'm also making notes on any maintenance that can be completed while I'm here, and waiting on IPv4. It'll work out better this way: if we proceeded today as originally planned, connectivity would likely be delayed, and by the time the servers were moved over, it could be too late for help to be available at the facility should it be needed.
Good news is QN finally approved the move a few hours ago, and being at the facility, things look pretty calm, and I'm probably the only person to visit today. Cabs all still look pretty full. I guess not many other people are concerned (or perhaps they haven't found space elsewhere.)
TYOC027 has been down for 5 days, since 19 Feb. The issue was confirmed as a network issue. How long will it take to repair?
An update for LAX:
9:30AM - We are awaiting approval for the migration. A small number of servers may be rebooted or briefly powered down as we make preparations. Before the physical migration, all servers will be rebooted, and we'll provide a short notice as an update beforehand.
10:45AM - We will begin preparing/rebooting servers to check BIOS. Controls will become unavailable, but servers will be powered back on for some time before the final physical migration begins. We will try to go in order from the highest-numbered nodes (LAXA031) to the lowest (LAXA004), and then move to the second cabinet from highest (LAX1Z021) to lowest (LAX1Z013).
3:15PM - We've completed preparations for the first cabinet and received approval for migration. However, to avoid any delays in getting services back online as quickly as possible, the physical migrations will likely occur tomorrow morning (February 25th, 2025) as we are still waiting on third parties for IPv4 and for better availability of techs at the receiving facility.
We will continue with preparations and restore controls at approximately 5PM. Once a timeline is available for the physical migration, it will be provided. We are currently aiming for morning to afternoon on the 25th.
Am I blind, or is the new facility still a mystery?
The Ultimate Speedtest Script | Get Instant Alerts on new LES/LET deals | Cheap VPS Deals | VirMach Flash Sales Notifier
FREE KVM VPS - FreeVPS.org | FREE LXC VPS - MicroLXC
you are not blind
hope migration is smooth
i don't want to fall back on my backup service, which is a genoa
I also had another one of my providers that was previously at QN move to a still unknown new facility.
Mystery abounds.
For staff assistance or support issues please use the helpdesk ticket system at https://support.lowendspirit.com/index.php?a=add
Any guesses for the new DC?
My bet is on psychz, but would have loved it if it were Multacom
I don't have a guess, and I kind of understand not making a noise about it until the equipment is out of the old place, but I'd like to know before too long.
Plot twist: it ends up in Tokyo
@FrankZ Hello?
Sure I'll venture a guess.
VirMach has remote hands checking and/or trying to recover a broken drive. I don't think the node is down; only some VMs are down on TYOC027. We were just lucky enough to be on a drive that has issues. If the drive can't be recovered, it will be replaced and restored from backups. This could take a few more days or a week, depending on the DC hands' workload and what needs to be done.
When the Los Angeles migrations are finished, I expect that VirMach may have time to deal with this, but right now it is probably not the top priority as relatively few VMs are affected. I know this probably doesn't make you feel any better about this downtime, but that is my guess. If you are really in a bind, and have your own backups to restore, PM me with what size VM you have and I'll see if there is something I can lend you in the meantime.
Of course, maybe you'll get lucky and, because I said it could take a week, it will be up in an hour just to show how wrong I am.
@FrankZ Hello?
Can you hear me?
Haven't bought a single service in VirMach Great Ryzen 2022 - 2023 Flash Sale.
https://lowendspirit.com/uploads/editor/gi/ippw0lcmqowk.png
The company that never returned our hardware, or the one owned by the same one that owns QN? My bet is CC.
Okay I'll do the big reveal since IP addresses are starting to get moved and anyone can look it up. AS46261 QuickPacket, LLC
I think the best bet for any of your provider(s) moving to an unknown facility is Equinix LA3 under WebNX.
woooooooohooooooooo
out of QN
Wow, it's the one that drives 6 hours to repair an important server.
No hostname left!
He's a solid guy and I didn't even hear about that story, just from interacting with him. I'm still upset we never got anything with them sooner. He also reached out multiple times to help out when we were going through all the previous major problems but it never panned out so I'm pretty excited we finally got something with them.
If I were to speculate, I'd say a good chance it's also QP for Chicago but just focusing on LAX for now.
network looks prem as always for VirMach
Wahooo! I'd love to be at Quickpacket.
prem, noice!
TYOC033
oh, no
good
Will there be a possibility for migration out of LAX after everything settles from the move?
I've had a few move to Digital Realty/CoreSite. I do know of one or two that moved to LA3, though not under my recommendation.
it's so quiet, I guess migration went pretty well?
Mine came up normally, IP works, routing works. Slightly longer than anticipated (17 hrs) but within reason for such a move.
From memory, so I could be wrong -- it took from around 9AM to 12PM to remove everything, another 30 minutes around 2PM for the storage server, and around 12PM to 4PM for the first cabinet. From around 4PM to 7:30PM we found out the switch had bad flash and a completely reset configuration, and fixed that for the time being. And another 2 hours for racking everything else and finishing up.
Edit -- Also, I'm pretty upset we didn't get updates in earlier. I added two or three and it seemed like they posted, but I was on 5G in a basement, so understandably they didn't go through.
Most of the wait was QuadraNet to drop announcements (I forgot to request it in between everything else going on and they took some time to process it) as well as IP lessors setting RPKI to new network, and now waiting on carriers to pick it up so routing works globally.
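For anyone following along, the RPKI step above works the standard way: the IP lessors publish ROAs authorizing the new origin ASN for each prefix (up to a maxLength), and validating carriers classify each BGP announcement as valid, invalid, or not-found before accepting it. A minimal sketch of that classification logic, with illustrative documentation prefixes and the AS number from this thread (not VirMach's actual ranges):

```python
import ipaddress

def rov_state(prefix, origin_asn, roas):
    """Classify a BGP announcement under RPKI route origin validation.

    roas: list of (roa_prefix, max_length, authorized_asn) tuples.
    Returns "valid", "invalid", or "not-found".
    """
    pfx = ipaddress.ip_network(prefix)
    covered = False
    for roa_prefix, max_len, asn in roas:
        roa = ipaddress.ip_network(roa_prefix)
        # A ROA only applies if it covers the announced prefix.
        if pfx.version == roa.version and pfx.subnet_of(roa):
            covered = True
            # Valid only if both the origin ASN matches and the
            # announced prefix is no longer than maxLength.
            if asn == origin_asn and pfx.prefixlen <= max_len:
                return "valid"
    # Covered by a ROA but no match -> invalid; no covering ROA -> not-found.
    return "invalid" if covered else "not-found"

# Example: a ROA for a documentation prefix naming AS46261 as origin.
roas = [("203.0.113.0/24", 24, 46261)]
print(rov_state("203.0.113.0/24", 46261, roas))  # valid
print(rov_state("203.0.113.0/24", 64496, roas))  # invalid (wrong origin)
print(rov_state("198.51.100.0/24", 46261, roas))  # not-found (no covering ROA)
```

Until the updated ROAs propagate and old ones are dropped, announcements from the new network sit in "invalid" or "not-found" for validating carriers, which is why global routing lags the physical move.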
Edit2 -- Lack of updates after 9:30PM was from me getting started on the EX4300 and then falling asleep, I had only slept a couple hours the last 3 days before that which I could manage but not when adding in a bunch of physical labor.
I'm currently going through the backlog of tickets, as well as setting up a new switch that can work reliably moving forward, so we can swap it out for the one that's having issues. (We still have a backup switch; I may either set that up as failover, or set the new one up and ship out a replacement for it later, keeping the third switch as a backup backup or whatever.) The configuration on the current switch was rushed, so it's possible I made an error. I had a backup of the configuration, but it wasn't on hand (I'll try to remember to carry it with me if we do something like this again).
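Keeping a known-good config on hand before a move like this is just a matter of snapshotting it somewhere timestamped. A tiny sketch of that habit (file names and paths here are hypothetical, not VirMach's actual setup):

```python
import shutil
import time
from pathlib import Path

def snapshot_config(config_path, backup_dir):
    """Copy a switch config to a timestamped file so a known-good
    copy is always available before/after maintenance."""
    src = Path(config_path)
    dest_dir = Path(backup_dir)
    dest_dir.mkdir(parents=True, exist_ok=True)
    # e.g. ex4300-20250225-093000.conf
    stamp = time.strftime("%Y%m%d-%H%M%S")
    dest = dest_dir / f"{src.stem}-{stamp}{src.suffix}"
    shutil.copy2(src, dest)  # copy2 preserves mtime for later comparison
    return dest

# Usage: snapshot_config("/srv/configs/ex4300.conf", "/srv/configs/backups")
```

Pulling the running config off the device itself (e.g. over SSH/SCP) would be a separate step; this only covers keeping the local copy versioned and portable.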
Edit3 -- Oh, and IPXO changed how they process re-assignments, so it actually kept QN as well; they're working on dropping that now. This probably complicated things a little in terms of carriers picking up the new announcement. The non-IPXO subnets went up pretty quickly. I didn't notice this as soon as I should have.