In Australia, as with most large policy areas, the debate around the NBN has been heated. Unfortunately, the natural outcome of our two-party political system is that it has become highly political, with the two sides arguing over the technology choice and deployment method.
For those not familiar with the details, the debate broadly centres on whether we should use FTTP (fibre to the premises) or FTTN (fibre to the node). In simplified form, the argument is that FTTP is a superior product but more expensive, while FTTN is claimed to be cheaper, sufficient and quicker to deploy.
A lot of the finer details are debatable, but these are generally accepted facts:
- FTTP is a superior product, in terms of speed, capacity and latency.
- FTTN is cheaper to deploy.
The debate has raged over just how much more FTTP would cost. This matters because the extra cost above and beyond an FTTN deployment determines whether the premium for FTTP is worth paying. Most people are willing to pay more for a better product, but only up to a point.
The on-going maintenance costs of each scenario have also been explored.
My personal position is that FTTP is preferable, for a few reasons. Of course, this view is shaped by the fact that I run an IT company focused on cloud services delivered to SMEs.
So in this post I’m going to focus on the benefits of ubiquitous NBN and how its availability changes the cloud equation.
Whenever serious business-grade cloud applications are deployed, bandwidth inevitably needs to be considered. For cloud services to really take off in the SME market, I believe 1Gb bandwidth and low latency are needed.
When you have that level of bandwidth you are able to virtually extend your datacentre beyond your physical premises. This is where cloud services start to really shine.
FTTP is capable of these speeds; FTTN, in its current state, is not.
Availability & Ubiquity
There has been a long standing argument that fibre has always been available to those who needed it, namely businesses.
Whilst this is true, the cost of installation is quite high, running into the many thousands of dollars. For a company that is just starting out, it's difficult (and very rare) to be able to justify such an expense so early on.
This inevitably means physical onsite infrastructure is installed. As the company grows, this infrastructure tends to grow as well. At some point, even once a fibre install becomes economically viable, the cost and complexity of migrating from onsite to the cloud becomes a massive roadblock.
However, if you take the install cost out of the picture (as the NBN does), businesses suddenly have some interesting options when deciding on their IT deployment. For brevity I'm only going to dive into a few of them below.
Data storage

One of the most basic necessities of any business is data storage. Today's businesses are generating more and more data, which inevitably leads to hardware upgrades to keep up. Not only that, but hard drives eventually fail, which is hardly a rewarding experience.
Cloud storage has recently seen some growth because of its benefits (e.g. Dropbox, SkyDrive and Google Drive): no CAPEX required for hardware and no maintenance to ever worry about.
But cloud storage for business has so far been underwhelming. The major reason is that it's too slow. Not long ago I removed the public cloud component from a client's storage architecture because it was too slow and synchronisation was delayed and unreliable.
Part of the problem was the cloud vendor, but more broadly this is a bandwidth issue. Sending a file to the cloud over a 10Mb connection is a vastly different experience from sending it to a file server over a 1Gb internal link. Once you have 1Gb of internet bandwidth, however, the cloud effectively runs as fast as your internal network's backbone.
With this much bandwidth, combined with a virtual private network, you can effectively deploy cloud storage that acts like local onsite storage. Under these circumstances, the typical onsite 'fileserver' is no longer necessary.
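To make the 10Mb-versus-1Gb difference concrete, here is a quick back-of-the-envelope sketch. The 10GB file size is a hypothetical figure of my choosing; the link speeds are the nominal rates quoted above, and real transfers will be somewhat slower due to protocol overhead and contention.

```python
# Ideal transfer times for a hypothetical 10GB file over the two
# link speeds discussed in the text. Best-case arithmetic only:
# protocol overhead and link contention are ignored.
def transfer_time_seconds(size_bytes: float, link_mbps: float) -> float:
    """Ideal transfer time over a link of the given megabit-per-second rate."""
    return (size_bytes * 8) / (link_mbps * 1_000_000)

file_size = 10 * 1000**3  # 10GB in bytes (decimal units)

wan = transfer_time_seconds(file_size, 10)    # 10Mb internet connection
lan = transfer_time_seconds(file_size, 1000)  # 1Gb internal link

print(f"10Mb link: {wan / 3600:.1f} hours")   # ~2.2 hours
print(f"1Gb link:  {lan / 60:.1f} minutes")   # ~1.3 minutes
```

The gap is a straight factor of 100: what feels like an instant copy on the office LAN becomes a multi-hour upload over a typical DSL connection, which is exactly why cloud storage has felt second-rate to SMEs so far.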
Server compute

Probably the most basic server compute a business needs is a directory server, in most cases Active Directory. Recently it has become more popular to virtualise these servers, but running them in the cloud is still considered fairly bleeding edge. However, the most pressing technical hurdle was recently overcome (VM generation ID support on Azure).
With that issue resolved, coupled with low latency, high bandwidth and a VPN, even the most fundamental of IT infrastructure can be run in the cloud.
Of course, if something so fundamentally critical can be run in the cloud, just about every other workload can be too. A 1Gb link to your cloud provider gives you access to effectively unlimited compute, in terms of virtual servers, with performance comparable to onsite infrastructure.
Under this model we can start to truly see a cloud first approach to IT infrastructure.
Disaster recovery

One of the main functions of disaster recovery is getting your data back online and available. Incredibly, many organisations are still using tape drives as part of their disaster recovery plan. Today, this is almost like a consumer using a tape-based Sony Walkman or a VHS player.
Cloud backup as part of disaster recovery is starting to appear more and more today, however its effectiveness is hampered by bandwidth constraints.
As an example, let's say a typical SME has around 1TB of business data. On a 10Mb symmetrical DSL line, also quite typical, a full data recovery would take roughly 11 days!
In IT terminology, that recovery time objective (RTO) is pretty poor. On a 1Gb link, however, the same recovery would take roughly 3 hours.
However, the most interesting thing about disaster recovery under a true cloud-only infrastructure model is that disaster recovery plans change completely. Once your entire server infrastructure is in the cloud, your disaster recovery plan simply needs to mitigate vendor outages.
When mitigating vendor outages, you basically need to incorporate different availability zones, and possibly multiple regions, into your architecture.
This change alone has huge cost and complexity reduction implications.
Unified communications

At my last place of work, they had deployed pretty much the entire Microsoft Unified Communications stack. The two biggest components for the end user were Outlook and Lync (now Skype for Business), with deep integration between them.
I must say the experience is quite impressive: with video conferencing, VoIP audio and instant messaging, distant communication becomes much more personal, and the physical separation between colleagues is massively reduced.
With Office 365 providing Skype for Business as a cloud service, with PSTN capability on the way, the door opens to video conferencing with the outside world. This of course requires solid bandwidth and low latency, but it can reduce travel costs and make remote work much more viable.
Why the cloud matters
So why do businesses need to embrace the cloud anyway? Because I believe the shift towards cloud computing is inevitable: the economies of scale provide a better product at a lower cost. Crucially, the economics of the cloud keep improving over time as costs come down in a highly competitive market.
But really, the shift towards the cloud means modernising businesses and making their IT infrastructure much more agile with respect to scaling and change.
The other benefit of easily accessible cloud infrastructure is that it brings enterprise-grade services to the masses. No SME could economically deploy and maintain, internally, the infrastructure required to run the Unified Communications services mentioned above. This levels the technological playing field between SMEs and the large enterprises that have big technology budgets. This is good.
A quick mention: fibre to the basement (FTTB) has been announced, and it offers almost the same benefits as FTTP without some of the associated install issues.