Hi, this is Wayne again with today's topic: “What Makes Apps and Websites So Reliable?”
It’s 3:00 in the morning and you can’t sleep, so the natural response is, of course, to bust out your phone and waste some time on Reddit, get to swiping on Tinder, or blow a few more dollars on that freemium game that’s quickly becoming a real problem. Luckily for you, all of these services work without a hitch, because they’re all being provided to you from huge data centers that are expected to be reliable, no matter how many people are connected or what time of day it is. But how do they stay running with almost no downtime? Well, our friends at IBM sponsored this video to tell you guys all about it, and it starts, of course, with having facilities and computers that are purpose-built to handle lots of incoming connections. Now, although some services have their own data centers, many others that either don’t need or can’t afford a dedicated facility, or that need additional capacity, actually rent server space and processing time from larger companies, a strategy called colocation. Data centers are typically made up of lots of servers sitting in racks to maximize how many machines can fit into one bit of floor space.
But of course, setting up a data center is a lot more complicated than just throwing a bunch of servers into a warehouse and calling it a day. Larger data centers hold so much equipment that they are actually built to be sturdier than your average building to accommodate all the extra weight from these large racks of servers. Some of them are so large that the workers inside are even given small vehicles like scooters or bicycles, so they can get around and troubleshoot issues more quickly. Also, those servers generate a ton of heat, so elaborate cooling systems, including water chilling, are often employed. Additionally, data centers are often laid out to be more efficient. For example, servers will usually either face toward each other or directly away from each other to create what are called corridors of hot air that can be pushed out more easily.
But when there’s an environmental hazard that isn’t controlled, huge problems can result. For example, a few years ago, Facebook actually had weather in the form of clouds inside one of its data centers, which caused some of the equipment to short out, so humidity control can also be very important for larger operations. Some servers are even designed to withstand more serious hazards like earthquakes, using braces and extra floor mounting.
So, of course, data centers are protected with advanced physical and human security, but what about the more technical challenges? Well, aside from encrypting it, data is often kept safe by spreading the processing and storage across multiple locations, rather than having it all on one single machine in one place. To make this simpler, servers are very often virtualized, meaning that one physical server can be seen as several different systems. This is incredibly useful because it allows a much greater number of tasks to be performed by one server, and that’s really important for colocation. And as long as each virtual machine is separated well enough from the others, this can even bolster security. Load balancing is another technique that ensures that servers are being used efficiently. I mean, you don’t want a situation where some of the servers are getting slammed with requests and extra processing while others are sitting idle, like that kid who contributed nothing to your group projects back in school.
So instead, servers are often configured to have their workload and their data balanced more evenly between them. This prevents overloading of certain machines and bottlenecks. This is also often done automatically in cases where one server might need to be taken down for maintenance, so that whatever it was working on can just be picked up by other servers. In a similar vein, redundancy is a critical feature of any modern data center. Copies of data are usually kept on multiple servers or even across multiple data centers, and these facilities often have multiple pipelines leading out to the public Internet in case one Internet service provider has a problem.
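To make those two ideas concrete, here’s a minimal sketch of how a load balancer can combine even distribution with redundancy: it hands out requests round-robin and simply skips any server that’s down for maintenance or failing health checks. The server names and class are made up for illustration, not from any real product.

```python
from itertools import cycle

class LoadBalancer:
    """Toy round-robin load balancer that skips unhealthy servers."""

    def __init__(self, servers):
        self.servers = list(servers)
        self.healthy = set(self.servers)      # in real systems, health checks update this
        self._rotation = cycle(self.servers)  # round-robin order

    def mark_down(self, server):
        """Called when a server fails a health check or is pulled for maintenance."""
        self.healthy.discard(server)

    def mark_up(self, server):
        self.healthy.add(server)

    def pick(self):
        """Return the next healthy server in rotation."""
        for _ in range(len(self.servers)):
            server = next(self._rotation)
            if server in self.healthy:
                return server
        raise RuntimeError("no healthy servers available")

lb = LoadBalancer(["web-1", "web-2", "web-3"])
lb.mark_down("web-2")                      # taken down for maintenance
picks = [lb.pick() for _ in range(4)]      # traffic keeps flowing to web-1 and web-3
```

Real load balancers also weigh servers by capacity and current load, but the core idea is the same: no single machine gets slammed, and losing one machine doesn’t stop the service.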
So all this means that enterprise-grade data centers are designed for at least 99.999 percent uptime, which works out to about five minutes of time offline per year. Some of them are actually even more reliable than that, and while websites and online services do obviously go down sometimes, it really is amazing how much has gone into making sure that we can access nearly anything, nearly any time. If only everything in your life was that dependable, right? Again, this video is brought to you by IBM. Did you know that resiliency is the most important facet of your data center? I mean, you can have all the speed, performance, and features in the world, but if your servers are down, who cares? IBM Z has the industry-leading resiliency needed to ensure that your bottom line doesn’t suffer from planned maintenance or unplanned downtime. Your whole data center could go up in flames and your recovery could happen without a hitch. When it comes to continuous availability, IBM Z customers achieve five and six nines.
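If you want to check where those downtime figures come from, it’s just arithmetic on the uptime percentage; here’s a quick back-of-the-envelope sketch (not an IBM figure):

```python
# Downtime per year implied by an uptime percentage.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes in a non-leap year

def downtime_minutes_per_year(uptime_percent):
    """Minutes of allowed downtime per year at the given uptime percentage."""
    return MINUTES_PER_YEAR * (1 - uptime_percent / 100)

five_nines = downtime_minutes_per_year(99.999)   # about 5.3 minutes per year
six_nines = downtime_minutes_per_year(99.9999)   # about 32 seconds per year
```

So “five nines” really does come out to roughly five minutes of downtime a year, and each extra nine cuts that by a factor of ten.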
So that’s 99.999 and 99.9999 percent or greater uptime, and they say the smaller IBM Z footprint can essentially replace as many as 100 servers. So: reduce downtime, scale to support large workloads, and get peace of mind with IBM Z, the leader in data center technology. So thanks for watching, guys! Like, dislike, check out our other videos, leave a comment if you have a suggestion for a future Fast As Possible, and don’t forget to subscribe, because even the most reliable data center can’t give you a notification if you’re not subscribed.