@tomsm said:
ny 022 is not work Offline PhysicalValuable-VM id:688689
Click on also can not be normal
This is the #1 issue and the #1 reason we've been getting tickets recently. People have an old ISO mounted and probably forget about it. After the VM gets rebooted a long time later, especially if we've updated the ISO since then, SolusVM will not unmount it automatically and instead completely refuses to boot the VM back online, without providing a proper error on the client side.
I made a knowledgebase article for it already, but all you need to know is that you have to unmount the ISO and then boot.
Make a script to automatically unmount the ISO after 48 hours.
Should've just listened to you. We made a script tied to the reboot button that unmounts it, and also added a note when the boot button is used. It turns out people who don't unmount the ISO also don't use the reset button or read before making a ticket.
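Roughly what such a pre-boot hook could look like, as a minimal sketch and not VirMach's actual script: it assumes a KVM node managed with libvirt, shells out to virsh to eject any attached cdrom media, and then starts the VM. The domain name and device handling are assumptions.

```python
#!/usr/bin/env python3
# Hypothetical "eject ISO, then boot" hook for a KVM node managed with libvirt.
# Not VirMach's actual script; domain name and ISO handling are assumptions.
import subprocess
import sys

def virsh(*args: str) -> str:
    """Run a virsh subcommand and return its stdout."""
    return subprocess.run(["virsh", *args], check=True,
                          capture_output=True, text=True).stdout

def eject_isos(domain: str) -> None:
    """Eject any cdrom media attached to the domain's persistent config."""
    # `virsh domblklist --details` prints columns: Type  Device  Target  Source
    for line in virsh("domblklist", domain, "--details").splitlines()[2:]:
        fields = line.split()
        if len(fields) >= 3 and fields[1] == "cdrom":
            target = fields[2]
            # --config updates the stored definition; the VM is assumed powered off
            subprocess.run(["virsh", "change-media", domain, target,
                            "--eject", "--config"], check=False)

if __name__ == "__main__":
    dom = sys.argv[1]  # e.g. a SolusVM KVM domain name
    eject_isos(dom)
    subprocess.run(["virsh", "start", dom], check=False)
```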
@somik said: Well, in his defense, he is not wrong. You should have a way to find tickets older than X days and prioritize them OR reply to them OR some way to let them know what to do.
His ticket was definitely in the queue, and skipped on purpose. He already knew he had used the IPv6 button twice. We're not going to focus on prioritizing tickets that request urgent IPv6, because once it's answered it's guaranteed to turn into an argument or another ticket being created if someone is already resorting to ignoring any procedure. In fact, I'm pretty sure something like that already occurred, so he was marked accordingly. This helps us get to the right tickets on time.
His ticket did also receive an AI response that correctly told him we don't officially offer IPv6, IIRC. If an AI can find this information when you didn't even type "IPv6" into the knowledgebase, then the ticket is not guaranteed to receive any further response, especially a custom ticket.
@somik said: OR charge them PER ticket opened/replied. Basically some way to reduce the number of tickets pending replies for over X days.
Customers have the option to purchase the correct support level if they want responses to everything, even if it's in the knowledgebase, and we'll go over everything with them. So your suggestion is already effectively implemented, if that's the level of support someone requires.
Oh, also, I didn't really announce this, but it's pretty cool and everyone should know it exists now. We hooked up our helpdesk chat bot on the website to GPT as well, and it will be extremely helpful if you're just quickly trying to find a policy, content from a knowledgebase article, or information presented on our website. It even works pretty well for quickly figuring things out for your operating system, although for that you can just use ChatGPT directly.
Here are some examples from tickets I reviewed/answered today, which it answered correctly for the most common situations those customers were facing.
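Purely as an illustrative sketch of the general pattern, not VirMach's actual integration: the bot picks the most relevant knowledgebase text, then asks a GPT model to answer only from it. This assumes the official openai Python package (v1 client) and an API key in the environment; the article snippets, keyword matcher, and model name are placeholders.

```python
# Illustrative sketch of the "GPT answers from the knowledgebase" pattern.
# Not VirMach's actual integration; assumes the official `openai` package (v1
# client) and OPENAI_API_KEY in the environment. Article text, the keyword
# matcher, and the model name are placeholders.
from openai import OpenAI

KB_ARTICLES = {
    "iso-unmount": "If your VM refuses to boot, unmount any attached ISO and boot again.",
    "ipv6": "We do not officially offer IPv6 on these plans.",
}

client = OpenAI()

def pick_article(question: str) -> str:
    """Naive keyword routing; a real bot would use search or embeddings."""
    return KB_ARTICLES["ipv6"] if "ipv6" in question.lower() else KB_ARTICLES["iso-unmount"]

def answer(question: str) -> str:
    context = pick_article(question)
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Answer only from this knowledgebase text. "
                        "If it does not cover the question, say so.\n\n" + context},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content

print(answer("ny 022 is not work, click on also can not be normal"))
```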
Disallow opening tickets until 12 hours after (1) the reset button has been pressed and (2) a VNC connection has been established and lasted a minimum of 300 seconds.
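As a toy illustration of how simple that gate would be to express (hypothetical field names, not any real billing-panel check):

```python
# Toy version of the gate described above; field names are hypothetical and
# this is not any real billing-panel API.
from datetime import datetime, timedelta
from typing import Optional

def may_open_ticket(last_reset_at: Optional[datetime],
                    longest_vnc_session_s: int,
                    now: datetime) -> bool:
    """Allow a ticket only 12h after a reset AND after a VNC session of >= 300s."""
    if last_reset_at is None:
        return False
    waited = now - last_reset_at >= timedelta(hours=12)
    used_vnc = longest_vnc_session_s >= 300
    return waited and used_vnc

# Reset 13 hours ago plus a 10-minute VNC session -> ticket allowed.
print(may_open_ticket(datetime(2023, 8, 1, 6, 0), 600, datetime(2023, 8, 1, 19, 0)))
```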
Accepting submissions for IPv6 less than /64 Hall of Incompetence.
That's just stupid. If a VPS isn't working, people will first try to reset it, followed by opening a support ticket. Since they already have bots replying to the tickets, it makes more sense to just feed the bot answers to "VPS not working" questions, such as unmounting the ISO.
Won't that cause issues for people using Alpine Linux, which requires the ISO to boot if you use the minimal installation?
Anybody running Alpine will have enough clues to figure it out on their own. That's not this crew.
True. And they can just do the full installation and boot from disk, even though that will reduce their available disk space.
Reason for Outage
Date: July 31, 2023
Summary:
This document serves as the Reason for Outage (RFO) to provide a comprehensive explanation of the network interruption that occurred on July 28th, 2023. The purpose of this RFO is to outline the root cause, impact, actions taken for resolution, and preventative measures to avoid similar incidents in the future.
Incident Overview:
On July 28th, 2023, at approx. 6PM EST, a network interruption was experienced in San Jose, Redondo Beach, and Secaucus. This interruption resulted in a partial or complete loss of network connectivity for affected services. The incident was resolved on July 28th, 2023 at approx. 8:25PM EST, with the exception of customers affected by a switch failure in Secaucus.
Root Cause Analysis:
After a thorough investigation, the root cause of the network interruption has been identified as follows:
An issue with a common upstream provider we use in all three locations.
An unrelated switch failure that occurred in Secaucus, which affected a small portion of customers in New Jersey.
Impact:
The network interruption had the following significant impact:
Downtime: Affected systems experienced a loss of connectivity, leading to disruptions in services.
Actions Taken:
Upon identifying the root cause, the following actions were taken to resolve the network interruption:
Immediate Notification to Upstream Provider: The network team promptly contacted upstream providers to resolve the upstream issue.
Testing and Verification: After the upstream provider implemented the fix, DediPath performed thorough testing and verification to ensure the stability and functionality of the network.
Since the incident was resolved, we have opened discussions with other providers to prevent this situation from occurring again.
Conclusion:
The network interruption experienced on July 28th was a significant incident that impacted the organization and its users. Through a comprehensive root cause analysis and implementation of preventative measures, we aim to strengthen the network's resilience and provide a more robust and reliable service to our customers. We apologize for any inconvenience caused and remain committed to continually improving our network infrastructure.
If you have any further questions or concerns, please do not hesitate to reach out to our support team at [email protected]
Sincerely,
DediPath
Just wanted to give a shout out to Virmach.
After almost 6 months, I have finally gotten my Stromonic Refugee VPS (NVMe 1G), and in Tokyo no less.
Since my Stromonic VPS billing had expired in June, I was expecting just a price match at this point - but VirMach has included some free months in the plan!
Thanks again, @VirMach !
I don't get it. This is a whole lot of yadayada that means nothing. I hate 'corporate speak'.
They admitted they had a problem, explained the problem, and are saying they are committed to trying to prevent these types of things in the future. What more did you want?
The reason for the outage, which the RFO doesn't really answer.
@lesuser said: [Quoting Dedipath] An issue with a common upstream provider we use in all three locations. An unrelated switch failure that occurred in Secaucus, which affected a small portion of customers in New Jersey.
(I recognise that the language may be hard for some and it isn't exactly brimming with information.)
Well, they took a page and a half to say, "Our upstream messed up. We called them. They fixed it," but never said what it was, nor how they're going to try to make it not happen again, only that they're having "discussions with other providers".
^ Agreed, as regards the "what", but to be fair they can't really elaborate on [any?] discussions with other providers.
If they'd told us the "what", those discussions might not be as important to know, since if it was a technical issue then they'd be able to discuss it...
Logged in, since I was on the hunt for an idle machine.
The new Panel, haaayaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
The CSS was bugging through the footer.
ITS WEDNESDAY MY DUDES
NYCB028 - Operation Timed Out After 90000 Milliseconds With 0 Bytes Received
NYCB028 is up for me. The last outage was reported on July 28.
Or is that just a control panel issue, not an actual server outage report?
My NY VPS is down AGAIN. I can't SSH, I can't open my service page in the VirMach dashboard, I can't reboot, I can't do a damn thing.
What an agony it's been with you guys...
What node?
I don't have the node number. I can't access my service page...
@cyforex - NYCB043 is down, NYCB028 is overloading. Does either of these sound familiar?
Yes, it is NYCB028.
I was able to open my service page for a minute; it's down again now though.
Should be better now.
NYCB027 has been unavailable for over 100 days. New record?
↑↑↑↑↑↑
Wrong vehicle above, here goes the correct one.
Yeah, cannot access NYCB043 on https://billing.virmach.com, although the VPS is working fine.