Hi, this is Wayne again, with today's topic: "This Little Trick Keeps Netflix Running – Kubernetes & Containers Explained".
Imagine sitting down at a fancy $50-a-plate restaurant and tearing into a 16-ounce steak with your teeth instead of cutting it. Not only would this be a pretty frustrating experience, but you'd probably be getting a look from your boss as he wonders why he's still letting you expense dinner. And as with steak, software becomes much easier to manage if you split it up into smaller chunks before eating it, or rather, using it. This approach is actually used by lots of larger services today, which split whatever it is they're providing into lots of small microservices, and our friends at IBM sponsored this video so we could tell you all about it. To explain what a microservice is, let's imagine you're using a site like Netflix. Every time you perform some small action on the website, like skipping forward, logging in, or paying a bill, that function is handled by a different dedicated microservice, a concept that Netflix actually pioneered, and all of them are held in containers. You see, the traditional way for a site to handle tons of users at once was to run lots of separate instances of the same code in virtual machines, or VMs.
You can learn more about virtualization in this article, but the basic idea is that a VM is a separate session of an operating system running inside another OS, and a typical server can run many VMs at once, which helps a great deal when many people are accessing a site. But running a full VM and the associated software you need typically requires millions of lines of code, and oftentimes a user wanting to complete just a simple task might require the server to spin up more full VMs, which is pretty inefficient and can end up hogging CPU cycles and other resources. Microservices in containers, by contrast, only contain the code for a specific task.
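To make that concrete, here's a minimal sketch of what one of those single-purpose services could look like, written in Python with Flask; the "review service" name, route, and sample data are hypothetical, just to show how little code a microservice really needs.

```python
# A minimal, hypothetical "review service" microservice.
# Assumes Flask is installed (pip install flask); names and data are illustrative only.
from flask import Flask, jsonify

app = Flask(__name__)

# Pretend data store; a real service would talk to its own database.
REVIEWS = {
    "stranger-things": [{"user": "alice", "stars": 5, "text": "Great show!"}],
}

@app.route("/reviews/<title>", methods=["GET"])
def get_reviews(title):
    # This service does exactly one job: return the reviews for a title.
    return jsonify(REVIEWS.get(title, []))

if __name__ == "__main__":
    # Listen on all interfaces so a container's published port can reach it.
    app.run(host="0.0.0.0", port=8080)
```

Package something like that with its dependencies into a container image and you have one small, independently deployable piece of the larger site.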
So we could be talking about just a few thousand lines of code instead of millions. Going back to our Netflix example, you might have one container for credit card authentication, another for the review system, another for the volume slider, and so on and so forth. So if lots of people are using certain microservices, the system can just create more instances of that specific container instead of having to open up more full-fat VMs. In fact, Google spins up some two billion containers every week, because they're so easy to scale thanks in part to a system called Kubernetes, the Greek word for helmsman or captain, which Google developed to manage containers automatically.
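As a rough sketch of what that scaling looks like in practice, here's how you might bump the replica count of a single hypothetical deployment with the official Kubernetes Python client; the deployment name, namespace, and replica count are assumptions, and real clusters usually automate this rather than run a script by hand.

```python
# Sketch: scale up a hypothetical "review-service" Deployment when demand spikes.
# Assumes the official Kubernetes Python client (pip install kubernetes)
# and a kubeconfig pointing at a cluster you control.
from kubernetes import client, config

def scale_deployment(name: str, namespace: str, replicas: int) -> None:
    config.load_kube_config()          # read credentials from ~/.kube/config
    apps = client.AppsV1Api()
    # Patch only the replica count; Kubernetes handles starting and stopping containers.
    apps.patch_namespaced_deployment_scale(
        name=name,
        namespace=namespace,
        body={"spec": {"replicas": replicas}},
    )

if __name__ == "__main__":
    # e.g. lots of people are reading reviews right now, so run ten copies.
    scale_deployment("review-service", "default", 10)
```

Kubernetes can also do this on its own with a HorizontalPodAutoscaler; the point is simply that more demand means more copies of that one small container, not more full operating systems.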
And although this sounds like it might only be relevant to network engineers, it actually has real benefits for you, the home consumer. If there's a problem with a service, or if the developers want to add a new feature, they don't have to search through ten million lines of code to find the issue and then possibly break the entire thing in the process. Instead, they can just change the one or two microservices that they want and leave the others untouched, meaning that fixes and new features can be pushed out quickly with less risk of causing other problems. The container paradigm also offers speed enhancements, as servers can run smaller microservices much more easily without tons of VMs slowing them down.
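Going back to that point about changing just one or two microservices: as a rough, hypothetical sketch, the same official Kubernetes Python client can roll out a new image for a single deployment while every other service keeps running untouched. The deployment name, container name, and image tag below are made up for illustration.

```python
# Sketch: update only the "review-service" container image, leaving other services alone.
# Assumes the official Kubernetes Python client and access to the cluster.
from kubernetes import client, config

def update_image(deployment: str, container: str, image: str, namespace: str = "default") -> None:
    config.load_kube_config()
    apps = client.AppsV1Api()
    # Strategic-merge patch: only this container's image changes, and Kubernetes
    # performs a rolling update so users shouldn't notice any downtime.
    patch = {
        "spec": {
            "template": {
                "spec": {
                    "containers": [{"name": container, "image": image}]
                }
            }
        }
    }
    apps.patch_namespaced_deployment(name=deployment, namespace=namespace, body=patch)

if __name__ == "__main__":
    update_image("review-service", "review-service", "registry.example.com/review-service:1.0.1")
```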
Containers have back-end reliability implications too, because it usually takes mere seconds to deal with a problematic container, meaning the potential for lengthy downtime is lower. So all this means that containers are being used for tons of applications. Games like League of Legends and Fortnite rely heavily on containers to reduce lag by easing the load on their servers, and Pokémon Go also used containers to fix issues shortly after the game's launch and add new features without disrupting the millions of users who were playing the game at the time.
Banks are also using containers with Kubernetes to handle loads of transactions at once without slowdowns, and even IBM's supercomputer Watson, which has been heavily utilized in the healthcare industry, has transitioned to using containers instead of monolithic virtual machines. So it turns out that small containers have actually made life a lot easier, kind of like digital Tupperware. A big thanks to IBM for sponsoring this video. IBM's LinuxONE is one of their Incredible Business Machines... that's what IBM stands for, you know! No, I'm just kidding.
It's "International"... doesn't matter, anyway. LinuxONE is their ultimate container and Kubernetes platform using Red Hat OpenShift. LinuxONE comes with all sorts of great security gizmos like hardware encryption, so your containers are going to be much better protected from all sorts of nasties, and LinuxONE can handle loads of containers, up to 2.4 million on a single system. It can even handle really big containers, which is useful if you're containerizing your existing monolithic applications before you refactor them into microservices. This means that your apps run more reliably, especially when lots of people are using them.
Finally, your containers can run on the exact same system as your data, so you don't suffer delays while data travels across the network. This means faster games and happier Cyber Monday shoppers. So go check that out right here at the link in the video description, and just try to contain your excitement.
So thanks for watching, guys. Like, dislike, check out our other videos, leave a comment if you have a suggestion for a future video, and don't forget to subscribe and follow.