We Americans are consumers. This should come as a surprise to virtually no one. Each year we consume just over 7 billion barrels of petroleum. We eat 8 billion chickens, and on Super Bowl Sunday alone we devour 1.25 billion chicken wings! We buy 7 million new cars annually, and over 70 million computers are sold every year. Smartphone and tablet sales have exploded, with over $55 billion in sales for phones alone projected for 2016.
Furthermore, we are now entering the era of the Internet of Things, or IoT. More than ever, devices that aren’t normally connected to a network or the internet are doing just that. We’ve got doorbells, microwaves, and even a garbage can that will post pics of your garbage to Facebook in an attempt to shame you into recycling.
Who’s Hogging All the Bandwidth?
What do all of these items have in common, you ask? Bandwidth. Every single connected computer, device, phone, tablet, and personal shaming trashcan needs an internet connection to the cloud: to upload data, download data, and, if you aren’t recycling, post your garbage to your social media account. That demand is what’s driving the looming infrastructure problem.
What people don’t realize is that we’re consuming ever more storage and bandwidth for all of our data needs, and we’re filling up the data centers of these cloud providers faster than ever. Cisco’s analysis shows that bandwidth usage grows about 20% every single year. At the same time, consumer demand for broadband connections increases as more people want to use services that require progressively more speed. I mean, who wouldn’t want a pair of internet-connected brainwave cat ears!?
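To put that growth rate in perspective, here’s a quick sketch (assuming the steady 20% annual growth from the Cisco estimate above holds) showing that demand doubles roughly every four years:

```python
# Rough sketch: how fast does bandwidth demand compound at ~20% per year?
# The 20% figure comes from the Cisco estimate cited above; everything
# else here is simple compounding.

def years_to_double(annual_growth: float) -> int:
    """Return the number of whole years until demand has at least doubled."""
    demand, years = 1.0, 0
    while demand < 2.0:
        demand *= 1.0 + annual_growth
        years += 1
    return years

print(years_to_double(0.20))  # prints 4 -- demand doubles in about four years
```

That doubling every four years, compounding indefinitely, is why providers can’t simply build their way out of the problem once and be done.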
Despite the rapid growth, this, in and of itself, is not an issue. Cloud providers are happy to perpetually expand their space and product offerings, provided the user base grows their revenue stream fast enough to fund the expansion. Cloud providers also use space optimization strategies, such as shortening retention windows for deleted items and advanced data compression, to help slow the growth.
Understanding the Need for Speed: The Broadband Speed Users Actually Get
First, to be considered “broadband,” an internet connection must be at least 25Mbps download speed and 3Mbps upload speed. Second, there are roughly 280 million internet users in the United States, 91.54 million of which are broadband users as of the first quarter in 2016. Furthermore, the average broadband user isn’t currently experiencing speeds at 25Mbps, thus creating a gray area of what actually constitutes a “broadband” user.
“Broadband” users actually got around 12Mbps on average as of 2015, a 13% increase over 2014, with speeds projected to keep climbing in 2016 and beyond. That puts us easily over a BILLION megabits (or 1 petabit) of aggregate bandwidth at any given time, which makes Netflix’s ability to consume almost 40% of all bandwidth at peak times seem utterly insane.
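As a sanity check on that petabit figure, multiply the subscriber count by the average speed quoted above (a rough, everyone-downloading-at-once assumption, not a measured peak):

```python
# Back-of-the-envelope check of the "over a billion megabits" claim,
# using the subscriber and speed figures quoted above.

broadband_users = 91.54e6   # US broadband users, Q1 2016
avg_speed_mbps = 12.0       # average broadband speed, 2015

total_mbps = broadband_users * avg_speed_mbps
total_petabits = total_mbps / 1e9   # 1 petabit = 10^9 megabits

print(f"{total_mbps:.3e} Mbps = {total_petabits:.2f} Pbps")
# roughly 1.1e9 Mbps, i.e. just over one petabit per second if every
# subscriber pulled their average speed simultaneously
```

So “over a billion megabits” checks out, at least as an upper-bound thought experiment.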
So, what does this mean for the future of infrastructure in the United States? A few things, actually. We will continue to increase our usage as both the broadband user population and the average speed offered by internet service providers grow.
Furthermore, in 2015, President Obama announced his “moonshot” and took the gloves off of High-Performance Computing by issuing an Executive Order that established the National Strategic Computing Initiative or NSCI.
As a result, major manufacturers like Intel, HP and Dell are racing to up the ante by packing more performance and bandwidth into smaller footprints. Imagine a data center one-tenth the size doing over fifty times the work on less energy. That’s the plan, and everyone wants to be first. The move was applauded almost universally by the technology and infrastructure crowd, myself included.
Become a Better DevOps Pro
Learn more about hardware and software in our FREE course.
The Digital Gorilla in the Room
I kind of hate to rain on the virtual parade, but I must. For all this great growth and advancement, we have a serious issue on the horizon, one we’re already starting to feel at the infrastructure level even though users don’t notice it just yet. As I try to make a mathematically complex subject easier to grasp, please understand that this is not just a US problem, but a global one.
By 2020, IoT devices are estimated to reach a population of about 34 billion worldwide. Serving them will require trillions of dollars in bandwidth investment, something the worldwide economy is going to realize sooner rather than later, and everyone wants in, from major corporations to governments. There is a major race to own and control as much bandwidth as possible.
This is also one of the reasons why Net Neutrality is such a hot topic. On the surface, it’s an issue of creating a level playing field for competitors bringing online products to market at potentially the same speed.
However, what isn’t talked about is how on earth these ISPs are going to supply everyone on the planet with fast access to the ever-increasing offerings of the internet. We’ve already seen fights over bandwidth and metering between major providers and major bandwidth consumers, most notably the peering dispute between Comcast and Netflix.
Where the real problem arises is in bandwidth expansion, specifically laying optical (fiber) cable fast enough to keep up with increasing bandwidth demand, along with expanding high-speed internet into the remaining US markets that currently lag behind the rest of the country and the modern world. More Americans acquire a broadband connection each year, meaning they’re either replacing a low-speed internet connection (yes, dial-up still exists) or getting connected for the first time.
Squeezing Out the Competition
The response by major providers to this growing problem has been rather interesting. Let’s look at Comcast, the largest ISP in the United States, and how they are trying to fix this issue.
First, a huge amount of Comcast’s user base is not connecting from their homes or businesses via a fiber connection. They’re using a copper coax cable connection that eventually connects to Comcast’s primary routing infrastructure which, like all providers, is fiber.
Comcast has been working with newer transmission standards to eke out more bandwidth from the copper cable, but also realizes that at some point it can only push a cable that isn’t nearly as effective as fiber so far. That standard, known as DOCSIS, has evolved over time, and the new DOCSIS 3.1 specification is starting to reach speeds of one gigabit over coax, though service at that level remains cost-prohibitive at the moment.
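A rough way to see how DOCSIS scales is channel bonding: downstream capacity grows by bonding more 6 MHz QAM channels together. The ~38 Mbps usable rate per 256-QAM channel below is an approximation, not an exact spec figure:

```python
# Illustrative sketch of DOCSIS 3.0 channel bonding. The per-channel
# usable rate is approximate (~38 Mbps for one 6 MHz 256-QAM channel).

USABLE_MBPS_PER_CHANNEL = 38

def bonded_downstream_mbps(channels: int) -> int:
    """Approximate downstream throughput for a given number of bonded channels."""
    return channels * USABLE_MBPS_PER_CHANNEL

for n in (4, 8, 16, 32):
    print(f"{n:>2} bonded channels -> ~{bonded_downstream_mbps(n)} Mbps")
# 32 bonded channels land around 1.2 Gbps, near the practical ceiling of
# DOCSIS 3.0; DOCSIS 3.1 switches to OFDM to push past a gigabit more efficiently
```

The takeaway: each speed bump on coax means claiming more of a finite channel plan, which is exactly why Comcast treats coax as a bridge rather than a destination.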
Comcast knows that, as much as they are using and supporting the existing coax cable infrastructure, this cabling cannot compete long-term with the performance of fiber. They are working night and day to lay more fiber and expand this offering to everyone as well as trying to balance offering faster speeds than their local competitors with overall bandwidth needs of that geographical location. This race, and tightrope walk, is not unique to Comcast at all. Every major ISP is currently doing this.
So, Why Not Just Lay Fiber Everywhere?
To begin, laying fiber is expensive. Permits have to be obtained from local governments and trenches have to be dug, often around major obstacles like private property or highways. The consumer price has to be appropriate for the local customers, which means the return on investment in infrastructure may not be realized for years, though the obvious long-term gains of owning the local infrastructure tend to outweigh the upfront cost. These basic logistics mean that every major undertaking for running fiber has to be weighed carefully, and local municipal battles have to be approached strategically. This is the first major hurdle.
The second hurdle, and the larger issue, is two-fold. First, providers cannot lay fiber cable fast enough to supply the bandwidth demands of their users, as mentioned above. Second, that bandwidth demand is growing faster and faster.
With the explosion of mobile computing and IoT, we’re seeing providers attempt to slow this growth through bandwidth caps that are rather expensive for the customer to exceed. These caps are estimated from the usage of the average subscriber. However, that usage keeps increasing as people add more and more connected devices.
Verizon’s Limited Network Capacity
Verizon is the perfect example of this. In 2015, New Street Research released a report that said it’s possible that Verizon could run out of network capacity in two to three years. Verizon is the largest mobile carrier in the nation and has a vast infrastructure to support the millions of devices that subscribe to their service, not to mention their FiOS fiber internet offering to the public for wired connections.
Verizon can buy time by refarming its 2G and 3G spectrum for 4G use and by packing more, smaller cell sites into the same areas, a method called “densification,” but that will only last so long. Aware of this issue, Verizon is scrambling to continuously expand its infrastructure. Remember, a mobile device may connect wirelessly, but that connection terminates into a wired infrastructure for routing and backhaul. As Verizon rolls out 5G to support the exploding mobile market, the fiber infrastructure and bandwidth in its data centers must grow as well, or be handled more efficiently, to offset the growth.
And that is the core of the issue. Every consumer and business internet connection, each with ever-expanding bandwidth needs, inevitably routes to a data center, which in turn routes to the other data centers where the cloud data lives. The fiber that connects all of these main routing points together is tough to expand, as mentioned above, and these ISPs cannot, nor would they want to, stop the growth of their consumer base and its revenue. With over 95% of all digital bandwidth moving over fiber connections, the fiber itself is starting to become a bottleneck, especially for long-distance cables such as the transatlantic fiber backbone.
Walk Into the Light
Aside from running increasing amounts of fiber (which all companies are currently doing, just not fast enough at the moment) or creating bandwidth caps to curtail excess use, the solution is to find new ways to make the existing infrastructure more streamlined and efficient.
To get the most out of our current fiber connections, there is a race to perfect photonics: technology for moving data much faster and more efficiently through fiber and other mediums such as silicon. In 2015 we saw this start to come to fruition, and while it makes the existing cabling far more efficient and allows for greater bandwidth, it still requires major investment, since all-new routing equipment is needed to take advantage of the new speeds.
Furthermore, a research team created a method of moving data using amplification and actually shaping the light spectrum. They have been able to effectively double the bandwidth that can move through fiber cable and even increased the distance of the transmission! Check out their article abstract and be prepared to get your engineering nerd on while doing so.
How Do We Avoid a Bandwidth Shortage?
So, are we going to see a bandwidth shortage in our time? Hopefully not. What we are likely to see is a slow increase in the cost of the internet to offset the cost of consuming more. Our monthly internet bills will rise, subscription websites may increase in cost, and so could the cost of advertising on “free” sites.
Our growth is only going to continue, and while it may occasionally slow, it will keep increasing. The next phase beyond this particular problem will be a push to consolidate functions and capabilities into a single device that can replace several others. Smartphone and phablet platforms like Android are the beginning of this digital revolution, but that is a discussion for another article.