View Ticket #232870
Subject: assigned IP not working on storage server
If it's urgent, make sure it's a priority department ticket. We're almost done catching up and completing all of those by today/tomorrow.
TYOC040 has been offline for a long time. Can I have a refund?
So many people are creating tickets in every different department, with various titles and custom requests, for TYOC040 and TYOC035. If we take them offline to try to fix it, more people will do the same. Once we catch up on all these tickets, I have enough time set aside to work on a more permanent solution, and we've sent out e-mails warning people of the maintenance window (with someone else lined up to answer all the tickets that will inevitably be created), then we could work on it.
Until then it seems like many people on those nodes would rather have work orders about it than have it work, as we're getting 10-20x more tickets, confusion, and ignoring of the posted network status than when other nodes face a similar issue.
(edit) So to answer you, no.
TYOC040 has been offline for 3 months, and there is no refund. Dirty VirMach!
I would say around 666 tickets in the queue.
It's 863.
Enjoy meditation without religion for one month.
If anyone got it let me know.
Windows templates are almost finished syncing; syncing the rest now, then looking into nodes that still have template issues. They should mostly already be good Linux-wise.
bruh
And a fun thing is, this server was delivered to me a week ago, but the uptime is 41 days...
If you still think that's not your server: work order ticket #958329
You said Dedipath LA. We don't use Dedipath in LA. I was only replying to that part.
(edit) Sorry, I thought you were talking about the QN LA network issue we posted today. Any network problem for the servers is going to be a longer wait right now, as we're still working through them.
@Wonder_Woman + @skorous / 2 = 911
Yeah, they were ready for some time on some of them and not on others, and for the older ones we had to go back and reorganize them because they didn't have all the information, and by then we had more of them to fill. It's expected for some.
It's strange that QN and INAP had the same problem at the same time; probably something upstream, then. QN didn't find any issues directly on the network equipment and it cleared up. I'm taking a look at your ticket to see whether it cleared up around the same time or not.
Even if somebody (from the US) had this number in mind, they probably would've avoided mentioning it (for a fun quiz).
Call me a snob, whatever, but they shouldn't have a VPS in the first place if they can't do something so basic.
911 records found = priority Tickets?
@VirMach
Current IP: 149.57.137.xxx (LA Quadranet)
Future IP: 47.87.136.xxx (San Mateo, Alibaba Inc.)
Oh, I need to re-purchase my software license then.
This is the second time. Lol.
First the migration, and now a second IP change.
What software is that? Shady AF, if they don't allow IP changes.
Take care. TYOC035 is the same.
Please tag @ehab for any dirty comments or jokes.
For the DumbAF above:
Be an ass = forfeited.
FANTASTIC
Good day and Goodbye
You mean like this? Including putting actual documentation on the network status page? Have you checked HostLoc for people that got it fixed?
yes, any time
NYCB035 offline
TYOC002S offline, please check. I think someone is mining with it, because I've noticed that the online CPU usage has now dropped significantly.
And it's back after 6 hours. Not restarted, since the uptime is in days... seems like the network died / got null-routed?
Tokyo storage, whatever the node name is, is down.
TYOC002S is not down, has no packet loss, but does have 30% CPU steal so it may not be very responsive.
EDIT: Graph added
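If you want to sanity-check steal from inside your own VM rather than reading it off a graph, something like this minimal Python sketch works (purely illustrative; it just samples the steal field of /proc/stat twice on a Linux guest):

```python
# Sample the aggregate "cpu" line of /proc/stat twice and report steal
# as a percentage of total CPU time over the interval (Linux guests only).
import time

def read_steal_and_total():
    with open("/proc/stat") as f:
        fields = f.readline().split()
    # field order after "cpu": user nice system idle iowait irq softirq steal
    values = [int(v) for v in fields[1:9]]
    return values[7], sum(values)

steal_1, total_1 = read_steal_and_total()
time.sleep(5)
steal_2, total_2 = read_steal_and_total()

delta_total = total_2 - total_1
if delta_total:
    print(f"CPU steal over 5s: {100 * (steal_2 - steal_1) / delta_total:.1f}%")
```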
idk, but I've been looping SSH connects for the last hour without any sign of life.
It's a 90001 milliseconds error in the control panel.
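A loop like that can be as simple as the sketch below (illustrative only; the IP is a placeholder, and it just retries a TCP connection to port 22 until something answers):

```python
# Retry a TCP connection to the SSH port until the host answers.
import socket
import time

HOST = "203.0.113.10"  # placeholder IP, not a real node address
PORT = 22

while True:
    try:
        with socket.create_connection((HOST, PORT), timeout=5):
            print("port 22 is answering again")
            break
    except OSError as exc:
        print(f"still unreachable: {exc}")
        time.sleep(10)
```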
You are correct that both the billing panel and SolusVM time out, and steal is up to 50%. But I am connected right now, so it is having issues but is not down. The top shown below is in CDT,
and I run a backup DNS server for that region on it; this is a graph of the response time to a DNS request from the four locations listed. The screenshot was taken two minutes ago.
EDIT: Maybe it is dropping VMs due to overloading and it just has not gotten to me yet.
EDIT2: OK, now I get no response as well. (11:35 CDT)
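A single probe of the kind behind that graph can be approximated with a few lines like these (just a sketch, not FrankZ's actual setup; it assumes the third-party dnspython package and uses a placeholder nameserver IP):

```python
# Time one DNS query against a specific nameserver (e.g. the node being watched).
# Needs the third-party dnspython package: pip install dnspython
import time

import dns.exception
import dns.resolver

resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = ["198.51.100.53"]  # placeholder nameserver IP
resolver.lifetime = 5  # give up after 5 seconds

start = time.monotonic()
try:
    resolver.resolve("example.com", "A")
    print(f"DNS response time: {(time.monotonic() - start) * 1000:.0f} ms")
except dns.exception.DNSException as exc:
    print(f"query failed after {time.monotonic() - start:.1f}s: {exc}")
```

Run from each vantage point on a schedule, this gives the per-location response times that the graph plots.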
TYOC002S is back, accessible via the billing panel, SolusVM, and ssh.
So close
We're going to reach out to some top users, especially around Saturday night, and figure out how it's being used and how it could potentially be optimized. That probably means anyone who noticed it today might be one of the top users, as that's how these things usually go. Once we deal with that, if it doesn't calm down, then we need to evaluate all the rest. The server's fine, the disks are fine, the controller is fine, and it's online, but it keeps getting stuck in this scenario every Saturday because of the way it's being utilized.
Tokyo essentially has lots and lots of sporadic bursting patterns that can amplify each other to the point where it's temporarily toast until it catches back up.