@skorous said: @Virmach: For machines that used to have more than one IP address but are now only allocated one is this a ticket occasion? It's not an issue for me as I wasn't currently using it but wanted to make sure there wasn't an automatic process which was going to follow up if I did nothing.
That shouldn't have happened using our scripts; they specifically add in the same number you used to have, but it's possible in a rare case, if it ran out, that you only got one, in which case yeah, you'd contact us. Just wait until a little bit later if possible.
Rats, hoped you hadn't noticed me. As I edited the original post, I'm just dumb today. Sorry to bother.
FFME002.VIRM.AC seems to have died like 15 minutes ago.
No ping on virmach status thing, no ping on my VM, panel not loading details - timeout.
VNC says Failed to connect to server (code: 1011, reason: Failed to connect to downstream server)
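The checks above (ping the status page, ping the VM, see whether the panel and VNC can reach the node) all boil down to "can anything open a connection to this box". A minimal sketch of that probe in plain Python — the address `192.0.2.10` is a reserved TEST-NET placeholder, not a real VirMach node:

```python
import socket

def tcp_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """True if a TCP connection to host:port succeeds within `timeout` seconds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, timed out, unroutable, etc.
        return False

if __name__ == "__main__":
    # Probe a few ports you'd expect on a live node (SSH, HTTPS, VNC)
    for port in (22, 443, 5900):
        state = "open" if tcp_reachable("192.0.2.10", port, timeout=1.0) else "unreachable"
        print(port, state)
```

If every port is unreachable while other hosts in the same DC answer, the node (or its uplink) is down rather than your VM alone.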
@Jab said:
FFME002.VIRM.AC seems to died like 15 minutes ago.
No ping on virmach status thing, no ping on my VM, panel not loading details - timeout.
VNC says Failed to connect to server (code: 1011, reason: Failed to connect to downstream server)
Uptime 20 minutes. My VM is back and working. If that was a disk failure, it wasn't my disk, and I am very sorry for the rest of you!
Thanks VirMach team.
It'll always have a bump up like that when it reboots, since it's booting probably 100+ entire operating systems at once. The actual crash is before that bump. It was at only 14 load, 49% CPU, 82GB active memory. I haven't looked into it yet, but I figured I'd just reboot it since it didn't look like any major errors.
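For context on why a load of 14 can be "only" 14: load averages scale with core count, so on a high-core-count host, 14 runnable tasks is light pressure. A quick sketch normalizing load by cores (plain Python, Unix-only, nothing VirMach-specific):

```python
import os

def load_per_core() -> float:
    """1-minute load average divided by core count.

    Values well below 1.0 mean the CPUs are mostly idle;
    values near or above 1.0 mean real queueing for CPU time.
    """
    one_minute, _, _ = os.getloadavg()  # same figures `uptime` reports
    cores = os.cpu_count() or 1
    return one_minute / cores
```

On, say, a 32-thread Ryzen node, a raw load of 14 works out to under 0.5 per core, which squares with the 49% CPU figure.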
Okay I just finished racking Seattle replacement servers in LAX. Warning -- graphic photo, kind of... Just putting spoiler tags for anyone who doesn't want to see it, click at your own risk. I forgot to take an actual photo after I was done, but it's pretty funny this happened in like the first 5 minutes there.
@VirMach said:
Okay I just finished racking Seattle replacement servers in LAX. Warning -- graphic photo, kind of... Just putting spoiler tags for anyone who doesn't want to see it, click at your own risk. I forgot to take an actual photo after I was done, but it's pretty funny this happened in like the first 5 minutes there.
No uplink though, waiting on DC to diagnose.
Take care boss!
This reminds me of the time I managed my supervisor's HPC lab.
@VirMach said:
Okay I just finished racking Seattle replacement servers in LAX. Warning -- graphic photo, kind of... Just putting spoiler tags for anyone who doesn't want to see it, click at your own risk. I forgot to take an actual photo after I was done, but it's pretty funny this happened in like the first 5 minutes there.
No uplink though, waiting on DC to diagnose.
@VirMach is literally giving his blood for his customers. Wait... was that a suicide attempt?
@nutjob said:
Does anyone know what the status of Dallas is? I've sort of lost track.
Doesn't exist as an option any more. For everything they had a backup of, it has already been moved to NYC and is functioning there (with new IP etc.). For a couple of nodes, the VM backups failed so they've done nothing with those ones until they get the actual node hardware returned from the DC, then presumably those ones will be moved to NYC as well - no timeline given for this.
@Virmach Any idea when Atlanta will be operational? I'm not sure what node I'm on, but I've just realized that my VPS is down and not accessible through the control panel. I read the status page but still don't fully understand it. Can I expect it to work by September 30 / October 1 at the latest? Will the IP change again?
@nutjob said:
Does anyone know what the status of Dallas is? I've sort of lost track.
Doesn't exist as an option any more. For everything they had a backup of, it has already been moved to NYC and is functioning there (with new IP etc.). For a couple of nodes, the VM backups failed so they've done nothing with those ones until they get the actual node hardware returned from the DC, then presumably those ones will be moved to NYC as well - no timeline given for this.
@VirMach Is there an option to move Dallas servers to NYC without restoring a backup?
@nutjob said:
Does anyone know what the status of Dallas is? I've sort of lost track.
Doesn't exist as an option any more. For everything they had a backup of, it has already been moved to NYC and is functioning there (with new IP etc.). For a couple of nodes, the VM backups failed so they've done nothing with those ones until they get the actual node hardware returned from the DC, then presumably those ones will be moved to NYC as well - no timeline given for this.
Can we go to DC and grab the server out?
Sad times -:(
Lalalalala can't hear you. Ughh
We back!
Hello, when will the push order start processing?
Push orders can take from days to weeks depending on what is going on.
Push orders are the lowest priority ticket.
For staff assistance or support issues please use the helpdesk ticket system at https://support.lowendspirit.com/index.php?a=add
vir vir is busy
i like vir vir when he is busy
vir vir is making our vir's more u know virrrr
@VirMach i didn't see your hand before i posted last.
take care
and don't cut something else.
Take care
Ya see! We told ya not to force us from SEA to LAX. Jinxed, I tell ya, jinxed.
It wisnae me! A big boy done it and ran away.
@VirMach since the HW is racked, can we expect a reincarnation of Seattle this week?
ITS WEDNESDAY MY DUDES
In Los Angeles yeah
take care , sir
This is just virmach teasing us that the next sale will be bloody good.
@Virmach how is Atlanta doing?
I hope the Miami migration happens soon xD
I'd prefer it going to NYC actually. Get decent DDOS protection.
LAXA018 is down again.
Could you make a more permanent fix for this node, @VirMach?
Any progress on TYOC038?
Man that sucks!