In recent years there has been significant growth in the deployment of enterprise-level access control systems, driven by technological advances that deliver substantial benefits over the typical access control installation for a building.
There are likely to be a number of stakeholders involved in the decision to install an access control system, with factors including, but not limited to, funding, health and safety, operations and human resources management.
Key stakeholders will frequently raise ‘compliance’, as failure to comply with government regulations and local law can have serious consequences for an organisation.
The smart access control cards that grant staff entry to a building can also be configured, through integration with other systems, to produce additional outputs, e.g. attendance reports.
The Server – Often the weak link
Unfortunately, no matter how well designed and maintained a system is, it remains vulnerable to downtime: no server manufacturer can guarantee that a component will never fail. It is also important to consider the potential impact of a cyber attack on the associated software applications.
The hardware and software elements of an access control system need to work effectively 24 hours a day, 7 days a week, 365 days a year for the system to serve its purpose.
At the most basic level of server availability, data backup, data replication and failover procedures are a necessity. These reduce the time spent restoring an application after a server failure. At best, data backups deliver approximately 99% availability if backups only occur daily – which is typical of many businesses. Whilst 99% availability sounds good, it equates to roughly 87.6 hours of downtime per year, or over 90 minutes per week.
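The downtime figures quoted throughout this article all follow from the same simple conversion: the unavailable fraction of a year expressed in minutes. A short sketch of that arithmetic (the function name and the set of tiers shown are illustrative, not from any particular product):

```python
# Convert an availability percentage into expected annual downtime.
HOURS_PER_YEAR = 365 * 24  # 8,760 hours

def annual_downtime_minutes(availability_pct: float) -> float:
    """Minutes of downtime per year implied by an availability percentage."""
    return (1 - availability_pct / 100) * HOURS_PER_YEAR * 60

for pct in (99.0, 99.95, 99.99, 99.999):
    print(f"{pct}% availability -> {annual_downtime_minutes(pct):,.0f} minutes/year")
# 99%     -> 5,256 minutes (about 87.6 hours)
# 99.95%  -> about 263 minutes (about 4.4 hours)
# 99.99%  -> about 53 minutes
# 99.999% -> about 5 minutes
```

These are the same tiers referred to later: daily backups at best reach the 99% row, HA sits in the middle two rows, and continuous availability targets the final row.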
Another solution – High Availability (HA) – consists of both hardware-based and software-based approaches to reducing downtime. HA clusters are systems with two or more servers running a matching configuration, with software keeping application data synchronised across all servers. When one server fails, another in the cluster takes over with little to no disruption. HA clusters can be difficult and costly to deploy and manage, and software must be licensed on every cluster server, which increases costs.
HA software is designed to detect evolving problems and prevent downtime: it automatically identifies, reports and handles faults before they cause an outage, using predictive analytics. The continuous monitoring this software offers is an advantage over the cluster approach, which only responds after a failure has occurred. Furthermore, as a software-based solution, it runs on low-cost commodity hardware.
HA usually provides from 99.95% to 99.99% uptime, which means, in the worst case, between 52 minutes and 4.5 hours of downtime per year. This is significantly better than basic backup strategies.
There are also solutions available for continuous availability, which equates to only around 5 minutes of downtime per year. Two servers running continuous availability software are linked and continuously synchronised through a virtualisation platform that pairs protected virtual machines together, creating a single operating environment.
If one physical machine fails, the software or application continues to run without interruption on the other physical machine. This means in-progress alarms and access control events, and data in memory and cache, are preserved.
Continuous availability means that no single point of failure can stop a software platform from running. Unlike high availability, clustering and backup solutions, there is no reboot or failover, so downtime is minimal to non-existent.