It's a Winter Wonderland… Awful for Business Continuity

February 18th, 2014

Anyone living in the Mid-Atlantic region is well aware of the recent weather conditions wreaking havoc on local residents and businesses. Just last week, PECO reported over 600,000 power outages in the five-county Philadelphia area. Many of those outages affected businesses, and some lasted more than four days.

Can you imagine the lost productivity from a four-day total outage at a business?

We can. We see it every day. Small to mid-sized businesses still rely heavily on local servers at their office to run critical apps for payroll and accounting, not to mention email. Email is easy to put offsite, but how do you add business continuity to office apps like QuickBooks? Remote data backup is NOT the solution. If your office is dark, having a copy of your data in the cloud does not help you get back up and running. What you need is true business continuity.

For most businesses, the simplest solution is using virtualization to replicate your core office servers and desktops to a remote data center. The servers are replicated exactly, including all applications and data. The desktop environments are built with the basic applications and custom apps that talk to the servers, effectively replicating a working office environment. Everything is pre-built and put into standby mode, and live data is then backed up daily.
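
As a concrete illustration, here's a minimal sketch (in Python) of what that daily data sync into the standby environment might look like. The hostname and paths are placeholders, and rsync over SSH is just one of several ways to do it:

```python
# Minimal sketch of a daily data sync into pre-built standby replicas.
# Assumptions: the replica VMs already exist at the DR site, and
# "dr-site.example.com" plus the paths below are placeholders.

import subprocess

SOURCE_DIRS = ["/srv/quickbooks/", "/srv/fileshare/"]
DR_TARGET = "backup@dr-site.example.com:/replicas/office1"

def sync_live_data() -> None:
    """Push the day's changes to the standby copies (run from cron)."""
    for src in SOURCE_DIRS:
        # --archive preserves permissions/timestamps; --delete keeps
        # the replica an exact mirror of the live server.
        subprocess.run(
            ["rsync", "--archive", "--delete", src, DR_TARGET],
            check=True,
        )

if __name__ == "__main__":
    sync_live_data()
```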

When an emergency event happens and the office goes offline, employees can make use of their home internet access and remotely connect to the virtualized desktop environments sitting at a safe and secure datacenter. From those desktop sessions, they can access replicated copies of their office file servers, office apps like Quickbooks or ACT, and even email.

A recent study found that even during widespread power outages, eight out of ten office employees typically still have power and internet at home when their employer's office goes down, and those who don't can be mobile and find a location that does.

What’s the alternative?

Add a generator to your office and some form of radio-based 4G internet access. 4G internet will cost between $50-$100 per month, plus a few hundred dollars in startup costs for equipment. A generator will cost approximately $25,000 to $50,000 (depending on building load size) for most small to mid-sized businesses in the 2,000 to 10,000 sq ft range. That doesn't include semi-annual maintenance and testing, and during a failure, extended run-time beyond 24 hours can be difficult.

Virtualization is much more affordable…

To virtualize a server and 5 desktops, you are looking at approximately $250-$300/month recurring, with maybe $1,000-$2,000 in one-time setup fees. That's a much better alternative to a generator install that may not even be that reliable. Better yet, this DR solution also acts as a data backup solution, which most businesses need anyway.
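
To put rough numbers on it, here's a back-of-the-envelope comparison over five years using the midpoints of the figures above. The generator maintenance figure is my own assumption, since it varies widely by unit:

```python
# Five-year cost comparison using the midpoints of the figures above.
# The annual generator maintenance cost is an assumed estimate.

YEARS = 5

# Virtualized DR: one server + 5 desktops
virt_setup = 1500        # midpoint of $1,000-$2,000 one-time
virt_monthly = 275       # midpoint of $250-$300/month
virt_total = virt_setup + virt_monthly * 12 * YEARS

# Generator + 4G backup internet
gen_install = 37500      # midpoint of $25,000-$50,000
gen_maintenance = 1500   # assumed annual service/testing cost
lte_setup, lte_monthly = 300, 75
gen_total = (gen_install + gen_maintenance * YEARS
             + lte_setup + lte_monthly * 12 * YEARS)

print(f"virtualized DR over {YEARS} years: ${virt_total:,}")  # $18,000
print(f"generator + 4G over {YEARS} years: ${gen_total:,}")   # $49,800
```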

Ditch the generator and virtualize your office for true business continuity and disaster recovery!

Why the Cloud is Awful for Your Business

November 1st, 2013

In my 15 years of IT experience, no term has annoyed me more than the “Cloud”. For starters, the term “Cloud” is a simplification of the idea of hosted services for non-technical decision makers. For the past 10 years or so, businesses have been successfully using hosted services in many ways; what is changing is how the service is conceptualized and how it is marketed. At the same time, it is also becoming less and less transparent.

What's the difference between a company maintaining its own hosted server environment in a datacenter and a cloud service?

Effectively, there is no difference. Many companies buy and maintain their own servers, place them in a secure datacenter, and achieve a stable hosted environment. But in a world of cost reduction, some companies pinch their IT budget, lose key staff, and begin to outsource. This is where cloud services latch on. Cloud providers rush in and convince managers that outsourcing everything is the way to go: no hardware to buy, no large staff needed, just pay us to do it all for you for one flat fee.

Sounds great right?

The problem is the cloud provider needs to make money too, and since they are effectively running and deploying the same hardware you would use, they need to cut corners to make a profit. The easiest way to do this is to use refurbished hardware and oversubscribe it across multiple clients, and to run a more cost-effective datacenter with fewer features. By cost-effective, I really mean “cheap”. You might ask yourself, how can a cloud provider get away with this? Simple: there is no transparency. 95% of cloud providers never disclose where their datacenter is or what its capabilities are. You can't go and see it for yourself.

Because the “Cloud” solution is cleverly marketed, buyers forget to verify that the service is powered by an actual, reliable network and facility. This happens whenever products are sold and re-packaged: you assume the provider takes on that responsibility, and since they have an SLA in their contract, it's not your problem. But it is your problem. It's still your data and your application. You need to know where it lives, and you need to confirm the facility is redundant, has fire protection, has an aggressive high-speed network, and uses top-of-the-line hardware, not refurbished servers.

Cloud services allow providers to effectively hide their operations from plain view. In the current environment of accountability, it is extremely important to know what's going on behind the scenes. At the end of the day, if your cloud provider loses your data, yes, it's their fault, but guess what… your data is still gone. Having someone to blame for a failure doesn't make it any better. Why not try to avoid the failure from ever happening?

It's important to choose who you want as a customer

March 20th, 2013

Customers may choose you, but it's also important to choose who you want as customers. For years, I have been telling people that one secret to success in the datacenter business is to be picky about what kind of customers you decide to provide service to. This may run counter to popular ideas about business, that is, that anyone willing to pay should be a customer of mine. That may hold true if you sell hamburgers, but when you sell datacenter space and IP backbone access, it's very important that you stay away from certain types of customers.

Customers I traditionally avoid:

  1. Gamers
  2. Adult Sites
  3. Email Marketers
  4. Proxy Providers

Why should the above group be avoided? Let's look at the four factors that make a customer undesirable. First, can they pay their bills reliably? Second, will they be a long-term customer? Third, will they require a lot of customer service? Fourth, will they impact your resources negatively?

Gamers fail three of the four criteria: they are bad at paying bills, they are not long-term clients, and while they don't require a lot of assistance, they negatively impact network utilization and are more prone to DDoS attacks. Adult sites can be good payers and long-term clients, but they require a lot of assistance and have a negative impact: they get DDoSed frequently, their bandwidth usage is highly volatile, and their handling of illegally copyrighted material causes major headaches and liabilities. Email marketers fail all four criteria: they're short-term clients, are not good payers, abuse IP resources, and are strong DDoS targets because everybody hates spammers. Proxy providers are interesting: they are good payers and long-term clients, but they carry a lot of baggage since they can't control what the users behind the proxy do. In our experience, proxy providers cause a lot of headaches and generate a ton of abuse complaints from their proxy clients doing everything from sending spam to running botnet attacks.
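
To make the evaluation concrete, here's a toy pass/fail table for those four criteria. The marks mirror the paragraph above; the "three of four" threshold is just my own rule of thumb for the sketch:

```python
# Toy scoring of the four criteria above (True = passes the criterion).

CRITERIA = ("pays reliably", "long-term", "low support", "light on resources")

customers = {
    "gamers":          (False, False, True,  False),
    "adult sites":     (True,  True,  False, False),
    "email marketers": (False, False, False, False),
    "proxy providers": (True,  True,  False, False),
}

for name, marks in customers.items():
    failed = [c for c, ok in zip(CRITERIA, marks) if not ok]
    verdict = "consider" if sum(marks) >= 3 else "avoid"
    print(f"{name}: fails {', '.join(failed)} -> {verdict}")
```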

You are the company you keep…

The type of people you have in your datacenter should reflect who you are as a business. I run a very stable, reputable business, so I prefer that my customers be stable, reputable businesses. In the long term, this philosophy has worked very well for me.

Choosing a local VOIP Provider for your Business Phone

February 19th, 2013

We recently switched over to a new VOIP provider for our phone services. In the past we were using one of those large national providers; the service was okay, but it could have been much better. Telecom is normally treated as a commodity, that is, if all the features are there, it's a matter of who has the lowest price. Yes, everyone has the same features, and prices can vary, but there are several intangible aspects that you can't easily identify.

We decided to use a local VOIP provider because we felt that having local access to the company would be beneficial. The provider, Essenz, Inc., is based in Lafayette Hill, PA, just 20 miles or so from our offices. Their business phone solution had all the features we were looking for, and the price was very competitive. Best of all, the phones were personally delivered to our location and set up by a technician, free and included in their service. Because they are local, they also offer same-day (4-hour response) hardware replacement if a phone fails.

These bonus features make all the difference, and trust me, if your phone dies, waiting 1-2 business days for a replacement to be shipped is not ideal. I encourage people to look locally first; you'll be amazed how many great providers are right in your backyard.

Influx of Colocation in Philadelphia post-Hurricane Sandy

December 4th, 2012

We have seen a dramatic increase in colocation activity in the Philadelphia area following Hurricane Sandy. Sandy effectively knocked out several North Jersey and Lower Manhattan facilities; the Whitehall St. facility in Lower Manhattan was without power for over a week. Then there were datacenters that had power but lost IP connectivity when their circuits from Manhattan went down.

On top of all of this, a local Philadelphia facility (Voicenet) decided to shut down its colocation operations. We have moved in over 5 clients from Voicenet alone, and another 6 clients from various providers in the metro NYC area.

What did we learn from Hurricane Sandy?

1. Don’t put your datacenter in a building with a below grade electrical room.

2. Don’t put your datacenter in an area without diverse IP POPs.

The first rule is obvious, or so you would think, but amazingly, people in NYC build telecom operations in buildings that have below-grade electrical rooms. In the case of Whitehall, not only was the electrical room below grade, but so were the fuel pumps for the generators. The second rule is broken all the time. I can't tell you how many facilities claim to be multi-homed with multiple carriers, but when you look closely, you find out that all those carriers come in via a single fiber ring that runs to a single POP.

There was one datacenter in Boston that had fiber running to Whitehall St. in Lower Manhattan, so when Whitehall went dark, the facility in Boston lost all IP connectivity. Boston has local IP POPs, so why get all your connectivity out of Whitehall in NYC? The answer is cost. It's cheaper to put everything on one big pipe and send it to a heavily trafficked POP like Whitehall, but as the old saying goes… you get what you pay for.
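
One rough way to sanity-check a facility's "diverse connectivity" claims is to traceroute the same destination out of each carrier and compare the early hops; if two supposedly independent carriers share upstream hops, they likely ride the same ring or POP. Here's a sketch: the source addresses are placeholders, and it assumes a traceroute binary that supports the -s flag (Linux/macOS):

```python
# Rough path-diversity check: traceroute the same destination via each
# carrier's source address and look for shared upstream hop IPs.

import subprocess

def first_hops(dest: str, source_ip: str, max_hops: int = 6) -> set:
    """Collect the first few hop IPs for a traceroute bound to one carrier."""
    out = subprocess.run(
        ["traceroute", "-n", "-m", str(max_hops), "-s", source_ip, dest],
        capture_output=True, text=True,
    ).stdout
    hops = set()
    for line in out.splitlines():
        fields = line.split()
        # Hop lines start with the hop number; the responding IP follows.
        if len(fields) >= 2 and fields[0].isdigit() and fields[1] != "*":
            hops.add(fields[1])
    return hops

# Bind to each carrier's interface address (placeholders) and compare.
carrier_a = first_hops("8.8.8.8", source_ip="198.51.100.10")
carrier_b = first_hops("8.8.8.8", source_ip="203.0.113.10")
shared = carrier_a & carrier_b
print("shared upstream hops:", shared if shared else "none - paths look diverse")
```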

Voicenet Outage – Northeast Philadelphia

October 19th, 2011

Another local datacenter outage to report. The Voicenet facility in Northeast Philadelphia had a major network outage last night. A colleague of mine who manages equipment there reports that the outage lasted about 45-60 minutes. The outage even took out Voicenet's phone system, so customers with equipment in the Voicenet datacenter couldn't even call in and complain.

I can't say it enough… Don't colocate equipment in datacenters that don't have true diverse network connectivity. Voicenet claims redundant fiber, but it's just a single fiber ring. Yes, a ring has redundant fiber and two paths to protect against a fiber break, but it's still a single ring operated by a single entity, with an equipment SPOF (single point of failure) at the other end. The only thing a ring protects against is a backhoe digging up the street or a tree falling down. Datacenters need to have true diverse fiber. That means separate fiber paths coming in via separate entrances, and the fiber itself must be owned and operated by separate entities with completely separate routing platforms.

The scary thing is there are several datacenters in the Delaware Valley that operate off fiber ring topologies. Stay away from these datacenters; it's just an outage waiting to happen.

Why did my Data Center UPS Fail?

October 7th, 2011

I hear this all the time. Most people move out of a datacenter because something bad happened, and it's usually a major power failure that causes the most trouble. In this article, I am going to outline and analyze a power failure event that occurred at an unnamed facility. This is a true story.

About 2 years ago, I fielded a call from someone who lost power at their current data center provider. In addition to being down, they also had some equipment failures (power supplies and some RAM went bad in a few systems). Their provider told them that nothing was wrong with the UPS; rather, it was a utility issue caused by a brownout. As soon as I heard this, I told the person that this explanation was completely bogus.

Let's recap the cardinal rules of a good UPS:

1. An online UPS setup should always provide clean output power regardless of the supply.

2. If an online UPS fails, an auto-sync bypass bridges the load to utility power within 1 Hz; no power is lost, only backup capability.

And let's recap what you need to do to make sure the above rules always apply (a small sketch of the replacement rules follows the list):

1. Check your batteries every 3 months.

2. Replace a battery as soon as its internal resistance rises by 10%.

3. Replace a battery as soon as it's 4 years old, even if its internal resistance is still within spec.

4. Provide suitable cooling to the UPS.

5. CHECK THE BATTERIES.
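
Here's the promised sketch of rules 2 and 3 as code. The data shape is hypothetical, based on what a quarterly battery report typically records:

```python
# Minimal sketch of the battery-replacement rules above. The reading
# format is a hypothetical stand-in for a quarterly battery report.

from dataclasses import dataclass

@dataclass
class Battery:
    battery_id: str
    age_years: float
    baseline_resistance_mohm: float  # recorded at commissioning
    measured_resistance_mohm: float  # latest quarterly reading

def needs_replacement(b: Battery) -> bool:
    """Rule 2: resistance up 10% from baseline. Rule 3: 4+ years old."""
    rise = (b.measured_resistance_mohm / b.baseline_resistance_mohm) - 1.0
    return rise >= 0.10 or b.age_years >= 4.0

string = [
    Battery("cab1-jar07", 1.2, 3.1, 3.6),  # +16% rise: replace now
    Battery("cab1-jar08", 4.5, 3.1, 3.2),  # aged out: replace now
]
for b in string:
    if needs_replacement(b):
        print(f"replace {b.battery_id}")
```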

I can't stress enough how important batteries are. The entire UPS is built around the concept of having working batteries. Almost every line-affecting outage of a UPS is due to a battery problem. At Quonix, we use Liebert Series 300 UPS systems that have had inverter boards fail, induction coils burn out, and input filters short out, and we NEVER lost output line power. That's why the Lieberts cost so much: they are designed to handle failures. But it all requires good batteries.

Getting back to the story about the brownout. Any UPS that experiences a brownout, or any kind of dirty power, will immediately engage its batteries to provide clean power while it activates the GENSET cut-over. This requires the UPS to run on batteries for 5-7 seconds. If the batteries can't hold, the UPS will drop offline into bypass mode and auto-sync to utility line power. Once a UPS goes into bypass and syncs to utility power, it no longer provides power protection or line conditioning, so all the dirty power goes straight through. If utility power is lost, GENSET power now comes straight through. And when utility power returns, the GENSET cuts out, causing another small blip. This is why the server power supplies and RAM went bad: the dirty, and possibly surging, power came right through the UPS into the rack cabinet.
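
A toy simulation of that sequence (not vendor code, and the timings are illustrative) makes the failure mode obvious:

```python
# Toy simulation of the transfer sequence described above: during a
# brownout the inverter rides on batteries while the genset starts;
# if the string can't carry the load that long, the UPS drops to
# bypass and unconditioned power reaches the racks.

def transfer_sequence(batteries_hold_seconds: float,
                      genset_start_seconds: float = 6.0) -> str:
    """Return what the racks see during a brownout."""
    if batteries_hold_seconds >= genset_start_seconds:
        # Healthy batteries bridge the 5-7 second genset start.
        return "clean power (inverter -> genset)"
    # Weak batteries collapse first: bypass passes dirty power through.
    return "bypass: unconditioned power reaches the racks"

print(transfer_sequence(batteries_hold_seconds=12))  # healthy string
print(transfer_sequence(batteries_hold_seconds=2))   # fouled string
```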

Many providers don't properly maintain their batteries. They just assume the batteries will last 4-5 years. Not the case. I've seen brand new battery cabinets have one battery go bad after as little as a year. Sometimes it's just a random manufacturing defect. And in many cases, all it takes is one bad battery to foul the entire array.

Want to be sure your provider is on top of things? Easy: just ask for a copy of their UPS and battery preventative maintenance contract. If they have one, and they should, it should be easy for them to fax or email you a copy. You can even request a battery report. At Quonix, the vendor we use for battery maintenance sends us a detailed graphical report on the health of each battery: voltage, impedance, internal resistance, temperature, and age.

Repairing Tate Access Floor Tiles

October 6th, 2011

How do you repair floor tiles?

For this article I am referring to the newer style of Tate access floor tiles. The newer style has a single piece of laminate that runs from edge to edge. The older-style tiles had the laminate stop about a quarter inch short of the edge, with the remaining space filled by a black edging strip that frequently snapped off.

The new style is great, but over time the laminate will start to pull away, especially in data centers with low humidity. It's simple to repair.

The laminate is held in place by contact glue, similar to a kitchen countertop. Contact glue can be loosened and re-hardened with heat.

To reattach your Tate laminate, get a standard clothes iron, the kind with a non-stick soleplate. Set the iron temperature to medium and turn off the steam. Obviously, do this repair work outside the datacenter. Place the iron on the tile's laminate surface and slowly move it around. The laminate surface needs to be heated for at least 2 minutes. Once properly heated, use a surface roller to apply even pressure over the top of the laminate and press it down hard onto the tile's underlying metal frame. Continue to use the roller until the surface has cooled down. At this point your laminate will be 100% re-attached.

Why local hosting providers are better…

March 17th, 2011

We all hear how it's good to buy our produce locally, but what about buying web hosting services locally? Interestingly, your company will be better off if you host with a local provider, and it has nothing to do with better support and everything to do with search engines and regionality.

Search Engines such as Google or Bing like LOCAL results

When you do an internet search, the search engine knows where you are and in turn will display search results that are close to your region. For example, if you do a search for lawyers, Google will undoubtedly return results for law offices in your area. It does this using IP geolocation: from your IP address, Google has a rough idea of where you are in the world.

The same technology that determines where you are located can be used to determine where a website and its company are located. Now let's be honest, search engines know that most websites are hosted outside their served region. However, if your website is hosted from an IP with a geographic footprint of, say, Ohio, and your business is in Ohio and mentions this in the indexable content on your site (mailing address, area code, etc.), the combination of these two things is very positive. It tells Google you are physically local to that area and cyber-local to that area. All of this is good for your ranking.
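
You can check your own hosting IP's geographic footprint yourself. Here's a quick sketch using MaxMind's geoip2 Python library; it assumes you've installed the package and downloaded their free GeoLite2-City.mmdb database, and the IP below is a placeholder:

```python
# Look up the geographic footprint of a hosting IP with MaxMind's
# GeoLite2 data. Requires: pip install geoip2, plus the free
# GeoLite2-City.mmdb database file downloaded from MaxMind.

import geoip2.database

with geoip2.database.Reader("GeoLite2-City.mmdb") as reader:
    response = reader.city("203.0.113.25")  # your site's hosting IP
    print(response.country.iso_code)                 # e.g. "US"
    print(response.subdivisions.most_specific.name)  # e.g. "Ohio"
    # If this doesn't match your business address, search engines see
    # a mismatch between "physically local" and "cyber-local".
```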

So go ahead and host your website with a local provider!

Cloud Computing and VPS Confusion

February 24th, 2011

I hate terminology. I especially hate terminology when people get things confused and use it improperly. My latest annoyance is the confusion and miscommunication around the terms Cloud Computing and VPS (Virtual Private Server).

The problem is that some hosting providers use the term cloud computing synonymously with VPS, and as a result the public now thinks they are one and the same. People sometimes call me asking if I do Cloud services, and I know they are talking about VPS, but they have been convinced that Cloud services are what they need. They mainly do this because Cloud Computing sounds better than VPS, and much of the mass media has lately started to push “The Cloud” as a viable product for everyone.

What is VPS?

VPS is virtualization: the concept of running multiple server OSes inside a single physical server. If you have a small hosting environment, VPS is ideal. Your VPS will sit on a single server with 10-15 other VPSes that belong to other customers. You and those other customers all share the resources of that one server. VPS is not Cloud, because everything resides on a single physical machine; that machine always provides your VPS with CPU cycles, RAM, and storage.

What is Cloud Computing?

Cloud computing has been around for years. True cloud computing is the concept of a large pool of servers that work together to provide CPU cycles for computation. End-users send computational work into the cloud, the cloud processes it very quickly, and the computed result is returned. From a fiscal standpoint, you only pay for the brief amount of time the cloud needed to work on your computations.
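
A toy illustration of that fan-out/fan-in model (not a cloud API, just a local process pool standing in for the pool of servers):

```python
# Toy fan-out/fan-in: jobs go to a pool of workers, are computed in
# parallel, and the results come back to the caller.

from multiprocessing import Pool

def expensive_computation(n: int) -> int:
    """Stand-in for a CPU-heavy job a client would submit to the cloud."""
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    jobs = [10_000, 20_000, 30_000, 40_000]
    with Pool(processes=4) as pool:  # the "pool of servers"
        results = pool.map(expensive_computation, jobs)
    print(results)  # you "pay" only for the brief compute time
```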

Cloud computing is obviously not VPS. The majority of VPS usage is for hosting data, and data hosting has virtually no need for cloud computing, since hosting is inherently very light on CPU processing.