I can confirm that neither UptimeRobot nor HetrixTools was affected by the Cloudflare outage: both were successfully visiting the monitored websites. The websites that did not use CF's services were operating normally; the websites that used CF's proxy and/or DNS services were affected.
Comments
Hetrix uses CF, so your statement is not correct.
Web server logs show that both UptimeRobot's and Hetrix's bots were visiting the websites during the outage.
We (HetrixTools) were partially affected, in that our main website uses Cloudflare and suffered some downtime. The server monitoring API endpoint is also behind Cloudflare, so a lot of our users' server monitoring agents were unable to resolve the hostname they send data to; this was the most impacted part, because different servers and networks have different DNS caching times, so some of them still couldn't resolve the hostname even after CF had recovered.
However, our backend downtime detection system (including all of our monitoring locations) was working as expected and detected quite a lot of downtimes. In fact, it detected so many that it triggered the internal fail-safes built in to keep things from going crazy.
It's not an ideal scenario, that's for sure. I'd classify this as an exceptional case: CF has become so big that it can literally bring down the Internet with a few bad routes, as it did last night.
Can confirm the world did not end as predicted when Cloudflare fell.
Cloudflare is cancer, and it's spreading. Fast.
Appreciate the great response! I've personally found a two-pronged approach to server monitoring to be the most reliable: set up a ping/port check and add the server agent, and only send alerts when 2+ locations fail for longer than 5 minutes (rough sketch of that rule below). Then set up alerts for CPU, RAM, and storage usage just in case things go wacky.
However, there are still some systems that are fully locked down behind firewall/NAT, so the only way to know they are down is via the agent talking out. For those, CF being down is bad.
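For what it's worth, here's a minimal sketch of that "2+ locations failing for 5+ minutes" rule in TypeScript. The `ProbeResult` shape, the function name, and the thresholds are all illustrative; this isn't UptimeRobot's or HetrixTools' actual API.

```ts
interface ProbeResult {
  location: string;   // e.g. "nyc", "fra", "sgp"
  timestamp: number;  // unix seconds
  up: boolean;
}

const MIN_FAILING_LOCATIONS = 2;    // alert only when at least this many locations agree
const FAIL_WINDOW_SECONDS = 5 * 60; // the "longer than 5 minutes" part

function shouldAlert(results: ProbeResult[], now: number): boolean {
  // Keep only probes from the failure window and group them by location.
  const recent = results.filter((r) => now - r.timestamp <= FAIL_WINDOW_SECONDS);
  const byLocation = new Map<string, ProbeResult[]>();
  for (const r of recent) {
    const list = byLocation.get(r.location) ?? [];
    list.push(r);
    byLocation.set(r.location, list);
  }
  // A location counts as failing only if every probe in the window was down.
  let failing = 0;
  for (const probes of byLocation.values()) {
    if (probes.length > 0 && probes.every((p) => !p.up)) failing++;
  }
  return failing >= MIN_FAILING_LOCATIONS;
}
```

Requiring every probe in the window to be down (rather than just the latest one) is what keeps a single dropped packet from paging anyone.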
What's the rationale behind using CF for that? Is it just DNS? I've personally found CF not to be the most reliable DNS out there. Frankly, even Vultr's free DNS has been more reliable over the past 12 months.
It's mostly the array of services they offer as a whole: for instance, Argo, which significantly improves the API endpoint response time; or their Load Balancing, which allows us to put any number of receiving/processing nodes behind it; or the fact that they block attacks on our endpoints almost daily, which is a lot of traffic that never even reaches our network; or the fact that we've integrated all of our nodes into a shared cloud firewall where scripts can block/challenge IPs globally. The list can go on and on...
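To make the "block/challenge IPs globally" part concrete, here's a rough sketch of what such a script might do, assuming Cloudflare's account-level IP Access Rules endpoint. The account ID, token, and IP are placeholders, and this isn't necessarily how HetrixTools has it wired up.

```ts
const ACCOUNT_ID = "your-account-id"; // placeholder
const API_TOKEN = "your-api-token";   // placeholder

// Push a block rule that applies to every zone in the account.
async function blockIp(ip: string, note: string): Promise<void> {
  const res = await fetch(
    `https://api.cloudflare.com/client/v4/accounts/${ACCOUNT_ID}/firewall/access_rules/rules`,
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${API_TOKEN}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        mode: "block", // "challenge" or "js_challenge" also work here
        configuration: { target: "ip", value: ip },
        notes: note,
      }),
    },
  );
  if (!res.ok) throw new Error(`Cloudflare API returned ${res.status}`);
}

// Example: block an IP that tripped a local abuse detector.
// await blockIp("203.0.113.7", "automated block: endpoint abuse");
```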
I'm not saying Cloudflare is perfect, far from it, but I also can't agree with people who avoid it like the plague. I guess it's just a matter of opinion... there's never a "one size fits all" kind of service, so you've just got to see whether it works for your project or not.
That makes a lot of sense! Thanks for taking the time to expand on my question.
There's a time and place for it, for sure. If you're blocking regular attacks, then it's worth using to minimize downtime, and whatever outages CF has are far smaller than the downtime you'd face without that protection. I won't hold it against service providers who get smacked all of the time, or who need some other CF features to improve their service.
I've been "REEEEEEEEEEEEE"-ing about CF since Friday though, just due to the sheer volume of sites/services using them, and how their outage broke so much that it made me check whether my firewall had blown up and whether my ISP was having issues.
I think the level of centralization around Cloudflare is really bad for the internet.
I'd like to see people, especially the nerds who hang around forums like this, really evaluate whether they NEED to use Cloudflare. I've only ever had one site hit with a small Layer 7 attack; I definitely don't need 24x7 DDoS mitigation, yet I had a bunch of sites sitting behind CF for no good reason.
So this latest outage just caused me to update my personal list of rules for how I operate in life. "No cloudflare" is now in there, right next to "never buy a server without native IPv6".
What's wrong with a server completely without ipv6 support?
Coming up on a decade since World IPv6 Launch day, and we've run out of v4 address space. I like connecting directly to things and not stuffing them behind NAT, reverse proxies, port forwarding, etc., or paying out the ass for additional v4 ($2-3/mo per IP, bought a /29 or /28 at a time, is not fun).
I also run a lot of v6-only infrastructure that I need my other servers to be able to talk to (backup servers, IPAM, site-to-site VPN, some monitoring tools, etc. don't require v4). I do have a bit of outbound NATv4 on some of those servers/VMs, though, for talking to v4-only stuff like GitHub.
Oh, IPv6-only is for the brave of heart. A decade after the IPv6 launch day and adoption is still nowhere near complete: https://stats.labs.apnic.net/IPv6/XA
At this pace, the technology might be completely replaced by IPv7 or IPv8 in the not-so-distant future.
I think the IPv6 authors/authorities made it way too complex and clumsy. If they had come out with something like an "IPv5" that added just one byte to the 32-bit address, taken from some unnecessary field within the same 20-byte IP header, or simply added a new IP header option with extended address bytes, the world would have adopted it in an instant.
You misspelled "broke", lol. Adding a /28 in some places I colo would cost more than the colo itself.
If we're going to throw out a couple of decades of work for a new addressing standard (and take on even more delays, unsupported hardware, etc.), we might as well go with IPv4+.
The "unsupported hardware" made especially with milking purposes in the first place, I guess
I often wonder how VirMach can offer $4/year with an IPv4 address.
CC was/is sitting on a mountain of spam-listed v4 addresses, so Virmach using them for cheap VMs actually helps clean them up long term.
There's a lot of cheap v4 floating around right now, since COVID screwed up a lot of the cheap VPS/proxy/VPN market that targeted Chinese customers. A chunk of the resellers went under, so you have providers on the West Coast with racks of gear and tens of thousands of IPs not being used.
Expect some more Virmach and AlphaRacks-style deals in the coming weeks/months.
The main selling point of CF is their 150+ PoPs, which translate into ~20ms TTFB when responses are served from edge Workers.
Hard to beat that latency and price ($1/million requests, serverless).
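For context, a Worker that answers entirely from the edge (which is where that ~20ms TTFB figure comes from) can be as small as the sketch below; the response body and cache header are made-up placeholders.

```ts
// A minimal Cloudflare Worker (modules syntax). All of this runs at the
// nearest CF PoP, with no round-trip to an origin server.
export default {
  async fetch(request: Request): Promise<Response> {
    const url = new URL(request.url);
    return new Response(`hello from the edge, path: ${url.pathname}`, {
      headers: {
        "content-type": "text/plain",
        "cache-control": "public, max-age=60", // placeholder cache policy
      },
    });
  },
};
```

Of course, that latency only holds for requests the Worker can answer without going back to an origin.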
I wonder if Constellix with their 16 PoPs and GeoDNS could match the TTFB latency.
The time to LAST byte of the request is way more important to me. I'd gladly spend 300ms on smart GeoDNS choosing the nearest server rather than wait until CF finally serves my request; that can take more than 600ms even with an excellent TTFB. With that many PoPs, CF is shiny for DNS though, provided you don't mind a once-a-year downtime in midsummer.
Or @Francisco's dedicated IP with shared hosting at $8 a year.
I own all of my IP addresses, Virmach doesn't.
I heard a lot of stories about the deals CC would give to get people on board: basically a free /24 or /23 with every E5.
Man, that's nuts and indeed a great deal, although it's a lot of legwork to clean up those IPs.
In what year did you purchase your IPs if I may ask?
You used to be able to get them for basically nothing when ARIN still had IPv4 addresses.
You just have to pay the yearly service fees, which are based on how many IPs you have.
It's stupidly cheap even today, you just need to wait on the waiting list.