@windytime said: @VirMach: When will the JP location be available for ordering a new VPS?
We're still working this out. We have about 20 servers left to send out, and we're deciding how many to send where. Japan does have one server awaiting setup, so it may go back in stock briefly, but we have to make sure to lock everything down first, which has delayed things.
This may come as a surprise to you, @VirMach, but we monitor the offerings and performance of dozens of providers worldwide so that our users can make informed decisions moving forward. The easy way would be to simply write your service off and move on. But we have a number of users that already use your platform so we'll hang in there, but I appreciate the refund offer. And hopefully your service will get over these speed bumps and improve in coming months. Best of luck.
xTom coming through again with ridiculously good support. I think within an hour of me firing off the LOA they had it all set up for Frankfurt, and they're working on Amsterdam now (taking longer because of our initially large VLAN, but it's being worked on).
I'll switch the main IPs after validating and then make the status update, so for those locations you should be able to use the new IPs soon.
@lysdev said:
New IPs on MIAZ012 seem to be working. I've manually added mine and it's now accessible.
Yep, was just about to update everyone. Network status page updated.
Miami, QN LAX, QN Chicago, Frankfurt, and Amsterdam are all ready. Amsterdam is technically still not accessible globally since it was just announced, but it should be good in the next hour or two. We haven't updated the main IPs yet, but you can technically change yours sooner.
The LOAs for Dedipath/INAP and Hivelocity have errors in them, my fault. I crammed in a /22 at the last minute in the wrong place, and everything got offset in a way that makes them technically invalid. I've already requested a correction.
Oh and Dedipath/INAP Phoenix was already done yesterday.
The IP changes have worked smoothly for me so far. It is good to have an overlap where both old & new are functional.
I have experienced much worse with other providers, e.g. hard switch with no notice, only informed of new IP after it is active, /etc/network/interfaces rewritten causing loss of wireguard interface, etc.
@tetech said:
The IP changes have worked smoothly for me so far. It is good to have an overlap where both old & new are functional.
I have experienced much worse with other providers, e.g. hard switch with no notice, only informed of new IP after it is active, /etc/network/interfaces rewritten causing loss of wireguard interface, etc.
I'm waiting to see what happens. I wasn't sure how smoothly it was going to go, so I converted several of my machines to use DHCP, figuring that when it happened I'd just reset the interface and it would all be there. The tables haven't updated yet, though.
It wisnae me! A big boy done it and ran away.
NVMe2G for life! until death (the end is nigh)
@VirMach any IP update as far as NYCB033X is concerned?
@soulchief said:
I've got all my IPs updated and working except for NY and DAL. Still waiting for those IPs to work.
So around 48 /24 blocks, or servers, remain. That's NYC, Seattle, Dallas, Atlanta, San Jose, Tampa, HV Chicago, and HV Los Angeles. The rest, which is the majority, have already been completed.
Still waiting on the LOA situation to be rectified and really hoping it doesn't cause a delay past Thursday.
I have experienced much worse with other providers, e.g. hard switch with no notice, only informed of new IP after it is active, /etc/network/interfaces rewritten causing loss of wireguard interface, etc.
Hey that sounds like us in a difficult situation.
I hope the last part doesn't happen this time, but it's technically a possibility. If your service is offline, not accepting ping, or the script errors out, it might try to reconfigure it again.
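The retry condition described here, offline / no ping / failed script run, can be sketched roughly like this. This is a hypothetical illustration, not VirMach's actual tooling; the ping invocation and the `reconfigure` callback are placeholders.

```python
# Hypothetical sketch of "reconfigure again if the VM looks dead" logic.
# The ping command (Linux flags) and reconfigure callback are illustrative.
import subprocess

def responds_to_ping(ip: str, timeout_s: int = 2) -> bool:
    """True if the address answers a single ICMP echo request."""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", str(timeout_s), ip],
        capture_output=True,
    )
    return result.returncode == 0

def maybe_reconfigure(ip: str, reconfigure, is_up=responds_to_ping) -> bool:
    """Re-run network configuration only when the VM looks unreachable."""
    if is_up(ip):
        return False  # service answers; leave its config alone
    reconfigure(ip)   # offline or erroring: try configuring it again
    return True
```

This is why a VM that is up and answering ping should be left untouched, while one that appears dead may get reconfigured even if its owner already switched IPs manually.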
@AlwaysSkint said:
AMSD029 just worked smoothly for me and I re-established it as a nameserver.
Network broadcasters are at it already:
I am wondering if I need to have Syncthing Discovery (UDP Port 21027) enabled when I only explicitly want to use Syncthing on my LAN. It seems to be enabled by default.
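For what it's worth, UDP 21027 is Syncthing's local discovery port, which is exactly what LAN-only use relies on, so it makes sense to leave it enabled and turn off the global pieces instead. A fragment of Syncthing's config.xml along those lines (the same toggles are available in the GUI settings; element placement within the file is abbreviated here):

```xml
<!-- Fragment of Syncthing's config.xml.
     UDP 21027 carries local (LAN broadcast) discovery, so keep it on;
     global discovery and relaying can be disabled for LAN-only use. -->
<options>
    <localAnnounceEnabled>true</localAnnounceEnabled>
    <globalAnnounceEnabled>false</globalAnnounceEnabled>
    <relaysEnabled>false</relaysEnabled>
</options>
```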
Amsterdam is going to have its VLANs split off after this is over. Most likely Friday or Monday.
@Eason said: @VirMach It's been rumored on loc recently. Will you start to migrate from F12 to JP for an additional $2 per month? Is this real?
What a weirdly specific rumor. I'm trying to think whether it's a misinterpretation of something I said, but that's definitely not planned. I will say, though, that our tools were never built for one location possibly costing more, so if we continue the upcharge for Tokyo, we'd need to figure out how to handle it officially; otherwise everyone would just buy another location and then migrate to Tokyo to avoid the fee.
We've already noticed that happening, and it's technically fine to do right now, but it wouldn't be a permanent solution.
IMO my favorite option would be to have Tokyo at the same price as all the other locations, if everyone would just behave and form an orderly queue, but that's an unrealistic thing to want when it's clearly more popular. The other part is that Tokyo ends up generating more tickets, more everything, even though NYC and LAX have more servers and customers.
But short answer: no, not true. Currently undetermined what we'll do.
Lmao. Don't you think it's a genius idea that the MJJs came up with in your stead?
Add a few bucks a month and you get all the Tokyo illegal immigrants, abusers, etc. moving their asses out on their own.
Probably a more efficient deterrent than the options listed previously.
Okay I started hearing back from people on storage. I'll be a little vague to maintain privacy.
One guy brought up in the ticket how the server has gone down (he was one of the people who caused it to lock up once or twice; well, one of the main contributors to that happening) as if that changes anything in his favor, but he's essentially using it to do video processing. I explained that he should consider not using HDD storage for that, so we'll see if he agrees and expands his setup, maybe with an NVMe server that actually does the processing before he stores the large video files on the storage server.
Second guy said he uploads files at one location and downloads files at a second location. That's an interesting way to describe a Plex server that has 100-150 different people connected to it right now. Not that I care; it just means we can't really help him reduce the usage, so I hope he's got it figured out.
Third guy said he's using it as a backup server with a script that's not optimized. Strange, since he's doing a lot more reads than writes. Said script must be called "qBittorrent." But again, I hope he figures out how to fix his settings. I would have loved to help him reduce the usage, but he's not interested either.
By the way, I want to reiterate that I am not in any way accessing customers' services, just basic tools that show general outputs. For example, if an IP is using port 80, you know they're running a webserver.
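That kind of surface-level check, an open port implying a likely service, can be sketched like this (the addresses shown are documentation placeholders, not real customer IPs):

```python
# Sketch: infer a likely service from an open TCP port, using only an
# outside connection attempt (no access to anything inside the VM).
import socket

def port_open(ip: str, port: int, timeout_s: float = 2.0) -> bool:
    """True if a TCP connection to ip:port succeeds within the timeout."""
    try:
        with socket.create_connection((ip, port), timeout=timeout_s):
            return True
    except OSError:
        return False

# port_open("203.0.113.10", 80) returning True would suggest a webserver
# (203.0.113.10 is a TEST-NET documentation address, used as a placeholder).
```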
Jesus, these could be semi-OK if people just didn't use them as a dedicated CDN, a video processing farm, or a streaming site. Even the Plex guy would probably be fine if he weren't literally mass file sharing on Plex. I guess no one's ever heard of use cases or efficiency.
Sidenote: Never mind the quality, feel the width - said facetiously with approx. 40 years IT "experience".
Accepting submissions for IPv6 less than /64 Hall of Incompetence.
(Was talking about that earlier: 2 New Yorks, 2 Kansas' .. )
It seems that the thousandfold bandwidth issue caused by the migration has been fixed. Good job!
VirMach vs NerdUno drama, narrated for your listening entertainment.
Looking forward to hearing the musical!
Head Janitor @ LES • About • Rules • Support
Hi... Welcome to page 100... Plenty of "refound" offers but none taken :-)
yes --- I did get my account credited. Maybe others will now follow.
VIRRRRR VISHHHHH refund NOW -
The Bus Killer
A syncthing, apparently.
1st page of DuckDuckGo..
One CHI successfully changed - I'll leave the other to auto-migrate (for curiosity's sake).
I clicked the Main IP button in SolusVM.
vps1 is now online with the new IP.
Checklist of where I had to update the IP address:
$HOME/.ssh/config
/etc/netplan/01-netcfg.yaml
bind directives
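A sweep like that checklist can be scripted. A minimal sketch, assuming example old/new addresses; the IPs below are documentation placeholders, and the file list should be adjusted to your own setup:

```python
# Sketch: sweep local config files for a leftover old address after an
# IP migration. Addresses are TEST-NET documentation placeholders.
from pathlib import Path

OLD_IP = "203.0.113.10"   # example: address being retired
NEW_IP = "198.51.100.20"  # example: replacement address

def swap_ip(text: str, old: str = OLD_IP, new: str = NEW_IP) -> str:
    """Replace every occurrence of the old address with the new one."""
    return text.replace(old, new)

def update_file(path: Path) -> bool:
    """Rewrite one config file in place; return True if it changed."""
    original = path.read_text()
    updated = swap_ip(original)
    if updated != original:
        path.write_text(updated)
        return True
    return False

# Typical candidates after renumbering, matching the checklist above:
#   ~/.ssh/config, /etc/netplan/01-netcfg.yaml, BIND zone files
# (remember `netplan apply` and a zone serial bump afterwards).
```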
Tomorrow's headline: VirMach destroyed my billion dollar business
I went ahead and did the IP switching for my VMs. Hopefully @VirMach's won't switch them back.
Virmach Deals
Wow Page #100.
Congratz everyone.
https://microlxc.net/
100 Pages - free VPS?