@atomi said:
Would it be possible to show that button also to people with servers on broken nodes (like LA10GKVM14)? That way users could regenerate their services on different nodes.
There's no way to differentiate between fully broken and partially broken nodes to display it only to those people.
@VirMach said: I'd actually accept these kinds of requests if it were possible to keep it clean, but realistically that means you get a new VM and then the old one just kind of hangs around while it's offline, taking up space until we manually verify which ones are abandoned and clear up the space.
I actually migrated out of a broken Chicago node. I have two machines as a result; just one is unavailable.
@FrankZ said:
Did I miss the Ryzen migrate button for Phoenix or was that not a thing?
EDIT: Never mind, the Phoenix network seems to be working now; slow, but working.
Took it away while we do more migrations; we can't have things change too drastically or it'll put the plans out of whack. It'll be re-enabled later tonight.
Finally got back the physical port numbers for the servers I needed. NYCB036 (formerly 101) should be fixed soon; I just need to wrap up what I was working on and configure it in the switch. SJCZ004 is unfortunately still down. I was going to fly in yesterday, literally just press the button, and fly back, but flight times didn't work out with the required 24-hour notice to access the site.
DALZ009 appears to have been having problems since at least last night; I didn't get a chance to send out emails or look into it yet. LAXA014 still keeps rebooting constantly; it's a loose cable or something. I asked the DC to work on it, but I think it got lost in all the communication, so I'll bump it up since I didn't get a chance to visit myself. We might have to move this one to the front of the queue, as our monitoring system also stopped hearing back from it last night. I'm hesitant to post updates to the network status page at this point, even though things have cleaned up a little in terms of our ability to do so, because it ends up burying the other mass reports; last time we forgot to bump those back up and it caused a heavy ticket load since they weren't up top. I'll try to at least resume sending out emails.
@VirMach said: SJCZ004 is unfortunately still down. I was going to fly in yesterday, literally just press the button, and fly back, but flight times didn't work out with the required 24-hour notice to access the site.
If DC remote hands can't press a button then maybe that location isn't working out.
Yeah, that's already been established, but at this point I've basically made myself step back and cool off; otherwise I feel like we're going to be migrating everyone around for the next two years before we can settle down with an appropriate set of partners that meets or exceeds our bare-minimum expectations. I had a whole story written out the other day, but it got very ranty, so I deleted it and didn't post it. The gist was that either the entire industry is pretty screwed up and understaffed/overworked right now, or we've got insanely bad luck, and there's no way to tell from a company's previous reputation that things like this can't happen. Even within the same company, there's huge variance from location to location.
Luckily, the locations we still have left with a lot of open space to fill are with xTom, which has so far been the clear winner, even counting the more recent cabinets we got with another company. So we just need to get through this one last hurdle, and once I can work out the logistics, I don't mind living in hotels for the next few years.
But yes, it's still very scary, the thought that we could actually be left in a situation (and already have been) where a server can just go down for 1-2 weeks over something so simple.
I noticed that my BF special 2020 doesn't have a Ryzen Migrate button, and in its billing panel only CentOS 6.8/7 and Debian 8.2/9.1 ISOs are available. It has other issues as well, but that's no problem because I'm idling it. https://lowendspirit.com/discussion/comment/93153/#Comment_93153
Holy crap, we're actually so doomed in Dallas, @FrankZ. They (Flexential) finally got back to us, citing the "complexity" of the request, the request being:
Clear CMOS
Reset BMC
I think maybe I made it sound so complicated by asking them to also verify at the end. I'll figure something out for this location, JFC.
Hivelocity's the one that acquired Incero, right? I don't know why they (HV) didn't come to mind when I was thinking about Dallas. We can't do it right now, but there's absolutely no way we can stay with Flexential in this state, WTF.
Yeah, know anyone in Dallas with a rack in their house? They'd be more useful & have better uptime probably.
That's just scary bad. With the headaches you've had and how tired of it all you must be, I am impressed you didn't scream so loud at them that their eardrums bled.
I have already moved two of the three VMs I had with you in Dallas to other locations, so I am now safe from future Flexential F'ups.
I hope your contract with them is not too long, given the level of competency they have shown.
I have no comment on Hivelocity other than to say they have a real nice network in Dallas.
Lol, Flexential has one location at the Infomart (1950 N. Stemmons Freeway). I used to work in Dallas many years ago and we had servers at the Infomart in Broadwing's space (I think Level 3 bought them at some point?) on the 5th or 6th floor (I forget, it was a long time ago lol), plus a small private space on one of the other floors where I spent most of my time.
I only have one VM in Dallas at this point but was debating moving another there... Think I'll wait on that lol.
@VirMach
First, the good news:
My Chicago VPS migrated to Ryzen Chicago. With a simple network reconfiguration in SolusVM, everything went smoothly. Yippee! Now got my tertiary nameserver back up and running. Thanks muchly for the upgrade.
Now, the bad news:
Once again ATL is having issues, after being fine for a week or two. This morning node 7, if I remember correctly, went down and is dead to the Client Area. "Ach, no probs," I thought, I'll just move my (secondary) nameserver back to the already set-up VPS on ATLZ005. It started to go well, but then I began having connectivity issues with DNS. Hmm. It appears that another VM has nicked my IP address. (23.147.xxx.0 subnet)
Trying the Ryzen Fix IP solution... (goes off to do other stuff for a quarter of an hour.)
And the saga continues, eh? ..
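As an aside, if anyone wants to confirm that kind of IP theft from inside the affected VM, the check is essentially what `arping -D` does: probe your own address and see whether anything else answers. A minimal Python sketch under those assumptions (scapy and root required; the IP and interface below are placeholders, not the actual details from the post):

```python
# Sketch: duplicate-address check (the idea behind `arping -D`), run inside the VM.
# If any host replies to an ARP who-has for the VM's own address, something else
# on the segment is claiming that IP. Requires scapy and root; values are placeholders.
from scapy.all import ARP, Ether, srp

MY_IP = "203.0.113.45"   # placeholder: the address this VM is supposed to own
IFACE = "eth0"           # placeholder interface name

answered, _ = srp(
    Ether(dst="ff:ff:ff:ff:ff:ff") / ARP(pdst=MY_IP, psrc="0.0.0.0"),
    iface=IFACE, timeout=2, verbose=False,
)

if answered:
    for _, reply in answered:
        print(f"{MY_IP} is also claimed by MAC {reply[Ether].src}")
else:
    print(f"No other host answered for {MY_IP}.")
```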
Yeah, I saw that as well as a few other things happen and at this point I'm trying to figure out if I actually need to just start a VirBot cloning program so we can ship me off to live in a cage at every datacenter.
Announcement:
We've added an addon for all the people over the last few months who have indicated their extreme displeasure with their service going offline, claiming that as a result they lost thousands. There have also been people who were highly sensitive to IP address changes and location changes, and whose extremely important production servers required custom solutions, more precise notices, and scheduled maintenance windows.
For $800 per month per service, we would be able to set up additional monitoring, provide an account manager with direct contact, and work out a plan to actually minimize downtime on your instance. We'd also be able to ensure, going forward, that we can meet your customized requirements and make different business decisions for your node. For example, if an IP provider increases costs astronomically, we could still keep the IPs if they're important to you, or we could maintain a node we would otherwise have decommissioned.
I don't know if anyone will make this purchase due to the cost, but we basically extrapolated the pricing to the point where, if enough people buy it, we could theoretically run a dedicated 24x7 operation of network engineers, system administrators, and phone support, cover emergency travel costs, and so on: basically an environment where we maximize doing absolutely everything physically possible to ensure your production server remains accessible. I've already been thinking about this for some time, so I do have an idea of how many people I could theoretically support personally to that level; I scaled that accordingly to reach the pricing and then made it look nicer by rounding down.
We finally have an option that could genuinely prevent you from being in a situation where you're frustrated because you're losing millions while your service is inaccessible. We'd even keep an almost-live copy of your service ready to deploy immediately.
If you do not purchase this addon because you're in a situation where it's worth less than $25 per day to have your data, communication with us, or your service online, I do recommend you at least consider doing what you can on your end, such as purchasing a duplicate service. Otherwise the default assumption for your, let's say, $20-a-year service will be that it has a value of $20 per year to you and nothing more. This will continue to ensure that our SLA is enough for you and hopefully prevent any situation where requesting SLA credits does not completely solve your issue.
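For anyone wondering how the numbers hang together, here's a rough back-of-envelope in the same spirit; the staffing figures are invented purely to illustrate the "enough buyers funds a 24x7 team" reasoning and are not VirMach's actual costs:

```python
# Back-of-envelope for the $800/month addon, using made-up staffing costs
# purely to illustrate the "enough buyers funds a 24x7 team" reasoning.

ADDON_MONTHLY = 800            # $ per service per month (from the announcement)
DAYS_PER_MONTH = 31

per_day = ADDON_MONTHLY / DAYS_PER_MONTH
print(f"Addon works out to about ${per_day:.2f}/day")   # ~$25.81, i.e. roughly $25/day

# Hypothetical: a minimal 24x7 rotation of engineers/admins/phone support.
# 3 shifts * ~1.7 people per shift (to cover weekends/holidays) at a fully
# loaded cost of $12k/month each, plus a travel/emergency budget.
monthly_team_cost = 3 * 1.7 * 12_000 + 10_000
subscribers_needed = monthly_team_cost / ADDON_MONTHLY
print(f"Roughly {subscribers_needed:.0f} subscribers would cover a "
      f"${monthly_team_cost:,.0f}/month 24x7 operation")
```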
@Virmach - my fallback secondary server is now working again. The Ryzen Fix IP function did the trick. Phew!
One less thing on your plate. Like that will make a difference!
@VirMach said:
Yeah, I saw that as well as a few other things happen and at this point I'm trying to figure out if I actually need to just start a VirBot cloning program so we can ship me off to live in a cage at every datacenter.
Careful, some of the ones around here might get a copy of your clone and do unspeakable things with it. Or your clone army might mutiny after dealing with some of the braindead things you've had to deal with.
@VirMach said: We've added an addon for all the people over the last few months who have indicated their extreme displeasure with their service going offline, claiming that as a result they lost thousands.
I really don't know why people try for that claim. At best it means they are too stupid and too cheap to have redundancy & backups for something that makes them that much money. At worst, it just shows they are completely full of shit about every topic, even something as minor as a $20/year service. Either way, you come out looking bad AND your service is still down
Full disclosure: I am a cheapskate, as reflected in my nick. To compensate, my earnings are also crap. Therefore I only have the potential to lose hundreds of bucks.
(Nearly) every penny counts - if I could only resist great deals. Anyone wanna buy/transfer a 2.5GB Ryzen Special? Only one owner, little used and very reliable (I think).
@AlwaysSkint said: @Virmach - my fallback secondary server is now working again. The Ryzen Fix IP function did the trick. Phew!
One less thing on your plate. Like that will make a difference!
I've had the same issue after migration: some other VPS uses the same IP, and there's no Ryzen Fix IP available at NYCB018. So I have a fast Ryzen idler with semi-working networking.
I've gone through these dozens of times at this point and checked for broken IP assignments; it looks like I missed one on NYCB018 because I'm semi-dyslexic and it was .212 instead of .221 on a single VM. I've corrected that one now.
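For what it's worth, that kind of audit (catching a transposed .212/.221 that leaves one IP assigned to two VMs) is easy to script against an exported VM-to-IP list. A minimal sketch assuming a hypothetical CSV dump with vm_id and ip columns (not SolusVM's actual export format):

```python
# Sketch: flag IPs assigned to more than one VM in an exported assignment list.
# Assumes a hypothetical CSV with "vm_id,ip" rows; adapt to whatever the panel exports.
import csv
from collections import defaultdict

assignments = defaultdict(list)            # ip -> [vm_id, ...]

with open("vm_ip_assignments.csv", newline="") as fh:
    for row in csv.DictReader(fh):
        assignments[row["ip"].strip()].append(row["vm_id"].strip())

for ip, vms in sorted(assignments.items()):
    if len(vms) > 1:
        print(f"DUPLICATE: {ip} assigned to VMs {', '.join(vms)}")
```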
Anyone know if CC Buffalo is on some IP blacklists? I have trouble reaching my server there from Comcast, where I am right now, but it works fine from just about everywhere else. One person has been able to reproduce the issue from another location on the Cox network. The server itself is fine afaict. I haven't yet had the energy to compare packet traces at both ends, but will do that tomorrow. This is très weird.
I have normally found over the years that most/almost all CC IPs are on uceprotect.net & many are on barracudacentral.org.
That said, anybody who blocks based on uceprotect.net is not helping anybody IMO.
You can check your IP here: just put your IP in the search box and then click the RBL tab.
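The same RBL check can also be done with a couple of DNS lookups instead of the web form. A rough sketch using the two zones mentioned above (listing policies differ per zone, so treat a hit as a starting point rather than proof):

```python
# Sketch: query a couple of common DNSBLs for an IPv4 address.
# A listing is signalled by the reversed-octet name resolving inside the zone.
import socket

DNSBLS = [
    "b.barracudacentral.org",
    "dnsbl-1.uceprotect.net",   # UCEPROTECT level 1; dnsbl-2/dnsbl-3 cover netblocks/ASNs
]

def check_ip(ip: str) -> None:
    reversed_ip = ".".join(reversed(ip.split(".")))
    for zone in DNSBLS:
        query = f"{reversed_ip}.{zone}"
        try:
            answer = socket.gethostbyname(query)   # e.g. a 127.0.0.x return code
            print(f"{ip} LISTED on {zone} ({answer})")
        except socket.gaierror:
            print(f"{ip} not listed on {zone}")

check_ip("192.0.2.1")   # placeholder IP; substitute the Buffalo VPS address
```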
Re-enabled (the Phoenix migrate button).
Chicago going up today.
Any additional details about the datacenters & networks?
Yeah HiVelocity bought Incero. https://www.hivelocity.net/blog/hivelocity-acquires-dallas-iaas-provider-incerocom/
https://www.tomshardware.com/news/japanese-government-invests-dollar680-million-in-kioxia-wd-fab
New NVMe for new builds?
Hmm, not a good sign for me, since I'm missing the two additional IPs. Nearly as bad as having one 'stolen'.
I need a VPS at around $5/year; can anyone push one to me?