@VirMach said: I'm guessing it still has to do with the router on Evocative's end. I've spent a lot of time going down a rabbit hole and everything seems to point to their router having some problem that's causing visibility issues. It's announced properly, RPKI is correct, it shows up everywhere, but it looks like it's only getting picked up by a few carriers, even though if you dig into it, everything's already updated pretty much everywhere.
Looking at major transit looking glasses that have a problem with it, I see that it's definitely making it all the way to the facility, and then poof. Anyway, that's just my guess. At this point I'll play my "I'm not a network engineer" card.
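(For anyone who wants to poke at this kind of thing themselves, here's a minimal sketch, not the tooling we actually used, that asks RIPEstat's public routing-status endpoint how widely a prefix is visible. The prefix below is a documentation placeholder, and the field names are taken from RIPEstat's routing-status response as I understand it, so verify against the live output.)

import json
import urllib.request

# Toy sketch: ask RIPEstat's public routing-status endpoint how widely a
# prefix is visible. The prefix here is a documentation placeholder, not ours.
PREFIX = "203.0.113.0/24"

url = f"https://stat.ripe.net/data/routing-status/data.json?resource={PREFIX}"
with urllib.request.urlopen(url) as resp:
    data = json.load(resp)["data"]

# A prefix that is announced correctly (valid RPKI, present in collectors)
# but filtered somewhere upstream shows up with far fewer peers seeing it
# than a healthy prefix would.
vis = data["visibility"]["v4"]
print(f"Seen by {vis['ris_peers_seeing']} of {vis['total_ris_peers']} RIS full-table peers")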
Oh, by the way, for anyone interested: it was exactly this. I wish I'd written out the long version of my message so I could sound smarter in hindsight, but when I was doing the testing, there were two router IP addresses: one router worked, and the other didn't, and any time the broken one got picked, it'd cause the issue.
I mentioned this in my first message to them, with the exact IP addresses of the routers; I guess it just took a while for them to go through their checklist and get there eventually.
One of the routers wasn't set up to actually process anything for our announcements; the other one was.
But of course, first they asked for an MTR, then, with a long pause between each of these steps, they asked us why the MTR was cut off (it wasn't cut off; it was the full MTR, and that's simply where it ended), and then why it was working to the switch (we never said it didn't work to the switch, just that the announcements were having issues when going through those two routers).
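(To make the router lottery concrete, here's a rough sketch of the kind of testing this boiled down to, with placeholder addresses rather than the real ones, and assuming an mtr new enough to support --json: run repeated traces and bucket them by which of the two candidate routers the path crossed, so one router consistently eating traffic stands out.)

import json
import subprocess
from collections import Counter

# Rough sketch with placeholder addresses: trace the target repeatedly and
# bucket the results by which of two candidate routers the path crossed.
TARGET = "203.0.113.10"                     # stand-in for an IP in the affected block
ROUTERS = {"198.51.100.1", "198.51.100.2"}  # stand-ins for the two router IPs

results = Counter()
for _ in range(20):
    # -n skips DNS so hops come back as IPs; --json needs a reasonably new mtr
    out = subprocess.run(
        ["mtr", "-n", "--json", "-c", "1", TARGET],
        capture_output=True, text=True, check=True,
    ).stdout
    hops = [h["host"] for h in json.loads(out)["report"]["hubs"]]
    via = next((h for h in hops if h in ROUTERS), "neither")
    reached = bool(hops) and hops[-1] == TARGET
    results[(via, reached)] += 1

for (via, reached), count in sorted(results.items()):
    print(f"via {via:15} {'reached' if reached else 'lost':7} x{count}")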
Right now we're going through and manually fixing a bunch of issues caused by SolusVM having no proper system for referencing subnets by ID; instead it goes off however the IP addresses happen to sort, arbitrarily, based on something like one first octet being greater than another. Technically it's our fault for not following their logic exactly, but it means the services that got the wrong IP address won't work. So if yours doesn't work, it's wrong, and you'll basically get another IP change. It's a lot more annoying and complicated than that on our end to correct, and I can't really explain it well. Just imagine we have to empty one subnet out to fill the other, so it's double the work and will take some time.
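(If that's hard to picture, here's a toy illustration of the shape of the problem as I understand it, nothing to do with SolusVM's actual code: when the "first" block on a node is defined by how the addresses happen to sort instead of by a stable ID, adding a block that sorts lower silently changes which pool is considered first.)

# Toy illustration of the shape of the problem; not SolusVM code.
import ipaddress

# A node starts with one routed block, and VMs were assigned out of it.
blocks = ["172.16.0.0/24"]
intended = blocks[0]  # a stable ID would keep pointing here

# Later a second block lands on the node, and it happens to sort lower.
blocks.append("10.0.0.0/24")

def first_by_address(blocks):
    # sort-order selection: whichever network address compares lowest "wins"
    return min(blocks, key=lambda b: ipaddress.ip_network(b).network_address)

print(first_by_address(blocks))  # -> 10.0.0.0/24, not the intended block anymore
print(intended)                  # -> 172.16.0.0/24, what an ID lookup would return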
@summer9425 said:
I checked the network status page, which shows the node is online, but I indeed cannot use it. Is there anything wrong? Why won't god bless me?
You sure you did?
@DeviousDev said: Well, did you get migrated to NYC? I tried raising a ticket 3 times and they all got denied, and I cannot figure out why.
If you got denied the first time, then there is no reason to open more tickets; they will all be denied and you will just get more flags on your account. Looking at this behavior, I assume you didn't read the fine print that it's only for people who behaved nicely in the past, for example people who did not open tickets on nodes that are clearly listed in Network Status. Like VirMach said (paraphrasing), if you didn't bother to read the status page, you won't get extra/special treatment; you're going to wait like the rest. You managed to open 3 tickets about this [while posting here], so I am like 89.89% sure you had more tickets in the past, and this is why you're a no-go / your account is flagged for no 'extra/special' treatment :-)
Oh, sorry. It's embarrassing: I checked the status page https://statusbot.virm.ac/en/ because I bookmarked it, and it said this node is online. I forgot to check the knowledge base. Thank you.
@bakageta said:
Speaking of transfers, did anyone end up with a large storage VPS in LA that they want to part with? I could use like 2-4TB if anyone decided it's not a good fit for them. Probably a long shot, I don't know how many other people bought these big plans, but still worth a shot.
What pricing are you looking for? I can probably generate it for you.
I was hoping to hit around the non-early-bird sale pricing on these, around $120/yr for a Storage 4T.
You gotta pump those numbers down. Those are rookie numbers in this racket.
@NerdUno said:
Our site has been down for WEEKS. Run from these guys. Nobody could use their resources for a business unless they're just eager to go bankrupt.
Why are you running a business-critical site without HA? No one should have trusted you to run infrastructure in the first place, lol. All my sites and services are still up and working fine; be mentally strong and build your infra to be resilient to VM downtime.
I'm calling BS on this one.
@NerdUno said:
This may come as a surprise to you, @VirMach, but we monitor the offerings and performance of dozens of providers worldwide so that our users can make informed decisions moving forward. The easy way would be to simply write your service off and move on. But we have a number of users already on your platform, so we'll hang in there, and I appreciate the refund offer. Hopefully your service will get over these speed bumps and improve in the coming months. Best of luck.
You claim to be "doing this for 30 years" but you still use a single server for websites with no backups? Sounds like a you problem.
@reb0rn said:
LAXA025: I guess the IPv4 still hasn't propagated? I have an IP in the 66.59.196.x range.
It's in a weird state where we're planning IPv4 changes at the same time, so you'll be assigned new IPv4, but we're still waiting on the QN networking team for the announcements to officially go up. Not that many people are affected, so we've been dealing with it on a per-person basis in tickets when it's reported; if we end up migrating more than maybe 50 people there with these new IPv4 addresses, we'll add it to the network status page. We just don't want to clutter an already-cluttered page any more than we have to.
Your situation isn't the one described; that was the leftover 20-30% of people in Denver finally being moved. Your IPv4 will change, to one that doesn't function yet, so you're one step before that.
(edit) For the Denver ones, it's possible we give you 47.87.xx.xx and then send an IP change notice.
@AlwaysSkint said:
(@Virmach) That's my ATL (nameserver) just gone down - node ATL-Z010. It has been fine up until now.
The first issue was Dedipath dropping announcements, I think, for the two blocks. The new issue could be a carrier cutting something off, since the blend was, I think, Flex + Lumen and we don't know the ratio; but honestly, with Flex, I wouldn't be surprised if they assured us we're good until the transition and then just cut off power.
So far it's been 5 days since we signed on with them, maybe 6, and the sales guy stopped replying, the equipment retrieval guy stopped replying, and their onboarding team wants until Thursday to even begin a call about anything. Can't say I'm surprised, but at the same time I had some hope it'd actually be the 5-day setup they quoted for that location.
Flexential has been a headache about delivering the hardware to us. Jhon never answered our messages; we only got a response from Arianna in sales, who kindly helped us recover the hardware, although we lost our switch, since they still haven't delivered it.
Today's entire focus will be trying to finally get our new space at LAX QuadraNet up and running by midnight or so (18 hours), but that will likely have to change if we get a bunch of tickets for Atlanta. I'm hoping we've already provided ample alarms and information over the last few weeks, and that people realize there's less than a 100% chance anything at Dedipath stays up.
We'll also try to resume Seattle migrations to LAX, shifting back to loading in all of Seattle. It's reached the point where we still don't have a solid timeline after 2 weeks, which was my internal deadline for getting that to work 100%, so we might now shift to deploying from backups (again) in LAX and just move forward with Seattle in the next few months instead, if possible, to keep that location. Unless this all changes and we actually get a solid date from all parties.
@JBB said:
I realize DALZ004 is pretty far down the priority list, so I'm just throwing this out there: I've got backups of my data and would be fine with a wipe and rebuild if that saves you any hassle or time, rather than trying to migrate the existing image or recovering from your corrupted backups.
I suspect others might be, too.
It doesn't end up saving time, because we can't coordinate it well right now and our script for doing that is unreliable and clunky, but we could consider it maybe after today, if we have time to set it up to where it's solid.
I've been wanting to talk about this thing we've been working on. Sadly, I don't think it will be ready in time for any of this, or maybe ever, but basically it's really cool: we want to auto-detect when there's a possible outage and let people auto-redeploy onto an available nearby node, then load in from backups automatically if they're available. We have most of the parts working individually right now and just need to weld them together (the hardest part). Given that even one of these parts isn't working out too well right now, yeah, it might be a while.
But once it's done, it will also be used as a backup-loading feature (we won't load in backups directly; it'll use the same redeployment technique first), kind of like a bootleg, terrible snapshot restore. That would likely be a paid feature, though, to help fund more reliable additional offsite backups, so we're immediately prepared for any company going out of business. The redeployment step also sidesteps problems where, say, a node has disk problems or is otherwise down, which is exactly the situation when the feature from the paragraph above gets used.
And it'd also be used as a general paid "I want another service right now, but temporarily" feature, where it loads up the second service in a new tab on your service details page, or moves the current one off to a second tab and keeps the new one on the first, and they can be switched back and forth, with an expiry time on the one not marked as primary. It'd mainly be paid so we don't have to worry about people abusing it; I guess making money is a good reason to introduce a new feature too, but that's not the primary goal, and I'd make it free if it didn't definitely end up requiring additional support time. This would be the same end feature that's available for free, but only when a node is detected as having issues or being offline (later on it will also cover other issues like overloading, disk problems, and so on).
Everything I explained will be one "thing" that's just repurposed for all of the above. It'll also most likely be used for future mass moves, since the system can handle all parts of that process better. This actually started getting traction as a project when we were originally doing backups for a potential Dedipath out-of-business scenario around August 1st, so hopefully it'll realistically be a project we don't scrap.
(edit) Oh, it'll also be used for the deployment/redeployment issue button. And there's a small chance it may be used for migrations with and without data, since we haven't had great luck getting SolusVM's API to do that smoothly. But hey, maybe our implementation will be even worse than theirs! And it could also end up being used for IPv4 changes when none are available on the existing node.
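(For the curious, the skeleton of the flow described above looks roughly like this; all names are hypothetical and it's a toy sketch rather than our actual code: watch nodes, and when one looks dead, recreate its VMs on a healthy nearby node and pull in a backup if one exists.)

# Toy skeleton of the flow described above; all names are hypothetical,
# and a real version would hook into the panel, monitoring, and backup store.
import time

def node_is_down(node) -> bool:
    """Stand-in health check (heartbeats/pings in a real system)."""
    return node.get("healthy") is False

def pick_nearby_node(nodes, region):
    """Any healthy node in the same region with free capacity."""
    for n in nodes:
        if n["region"] == region and n.get("healthy") and n["free_slots"] > 0:
            return n
    return None

def redeploy(vm, target, backups):
    """Recreate the VM on the target node, then load a backup if one exists."""
    target["free_slots"] -= 1
    vm["node"] = target["name"]
    if vm["id"] in backups:
        vm["disk"] = backups[vm["id"]]  # stand-in for an actual image restore

def watch(nodes, vms, backups, interval=60):
    """Detect outages and auto-redeploy affected VMs onto nearby nodes."""
    while True:
        for node in (n for n in nodes if node_is_down(n)):
            for vm in (v for v in vms if v["node"] == node["name"]):
                target = pick_nearby_node(nodes, node["region"])
                if target is not None:
                    redeploy(vm, target, backups)
        time.sleep(interval)

A real version obviously also needs rate limits, capacity planning, and a way to not fire on false positives, which is most of why it might never ship.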
Comments
Well, did you get migrated to NYC? I tried raising a ticket 3 times and they all got denied, and I cannot figure out why.
That's kinda sad, because I'm pretty sure NYC would be faster for me.
san joseeeeeeeeee
anytime now
@VirMach
Hi, node LAX2Z019 has a new IP, but I still cannot access it; it gives the error message "Operation Timed Out After 90001 Milliseconds With 0 Bytes Received". I wrote a ticket describing this issue, but it was closed by the system automatically with the message "The issue has been confirmed and is being worked on". I checked the network status page, which shows the node is online, but I indeed cannot use it. Is there anything wrong? Why won't god bless me?
Because you're not @Jab
Got a notification that two of my VMs will have new IPs on 22nd September.
I was hoping to hit around the non-early-bird sale pricing on these, around $120/yr for a Storage 4T.
WHAT IS THIS? THIS IS NOT $7 PER YEAR.
You are a shame to this community.
/s
Seattle pleaseeeeee!
Hey, NerdUno's back! Always a ray of sunshine.
@VirMach Hey, NYC network with Royal BV is good! At least from my end.
Haha...
Thanks. I might move ns4 to another location, or just have some patience.
LOL.
Sad state of affairs, though: keep at it, VirMach. Some of us are still a-rootin' for ya.
I see datacenter shenanigans are back to normal at Flexential: the first days they were just nice on purpose to save clients, then it's back to "fuck you".