EETS8355 Docker Containers

Follow the steps below to practice deploying containers with Docker.

Docker

Requirement

Virtualization is an important, useful, and widely used technology in data centers today, as we all know by now. All the labs we have covered so far were based on technologies that virtualize the data center, and we will continue by exploring a few more technologies in the same category. In today's lab we will discover and learn another widely used technology called containerization, which is quite old but has found a new way of implementation and hence gained popularity and wide acceptance.

We can say a VM is an abstraction of physical hardware that turns one server into many. A hypervisor allows many VMs to run on a single machine, and each VM includes a full copy of an OS, the applications, and their binaries and libraries.

A container is a standard unit of software that packages up code and all its dependencies so that the application runs quickly and reliably from one computing environment to another. Multiple containers can run on the same machine and share the operating system kernel, each running as an isolated process in user space.

Note: You have to submit screenshots for this lab. Paste a screenshot of every step you performed, including all data files and Dockerfiles.

Launching Ubuntu VM for this Lab

  • Create a new VM using the Ubuntu 20.04 image file. Name the VM “your name-Docker”.
  • Power on the VM.
  • Get the IP address of the VM and access the machine using PuTTY.
  • Change hostname to “yourname-Docker”.
  • Add a user with ““, set the password to “Dcne123”, add it to the root group, reboot the machine, and log in with the new user's credentials. Perform the entire lab as the newly created user.
  • Upgrade and update your host.
  • Install curl on your machine. (Attach a screenshot showing curl installed)
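The host-preparation steps above can be sketched as shell commands. This is a dry-run sketch: “alice” is a hypothetical stand-in for the stripped user-name placeholder, and each command is printed rather than executed because the real commands need root privileges.

```shell
#!/bin/sh
# Dry-run sketch of the host-preparation steps. "alice" stands in for your
# own user name; commands are printed, not executed, since they need root.
run() { echo "+ $*"; }

run sudo hostnamectl set-hostname alice-Docker
run sudo useradd -m -G root alice              # create the user, add to root group
run "echo 'alice:Dcne123' | sudo chpasswd"     # set the required password
run sudo reboot                                # then log in as the new user
run sudo apt update
run sudo apt -y upgrade
run sudo apt -y install curl
run curl --version                             # screenshot: curl installed
```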

Installation

  1. For Docker Installation, we will run a script to install the latest version of Docker.
    The official installation guide to install Docker CE version on ubuntu: https://docs.docker.com/engine/installation/linux/docker-ce/ubuntu/
  2. The docker service by default can be run only by root. If a user needs to use Docker, it must be added to the docker group, which can be done by issuing the command ‘usermod -aG docker user-name’. Add your user to the docker group and verify by issuing the command ‘id user-name’.
  3. Once this is done, reboot the system for the changes to take effect.
  4. Log back into your host and check which version of Docker is running. Issue the command ‘docker info’ to get more detailed information, such as the number of containers running/paused/stopped, root directory, server mode, runtime, logging drivers, etc.
  5. Verify that Docker Engine - Community Edition is installed correctly by checking its version and running a simple container with the help of an image.
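Steps 1–5 can be condensed into the sketch below, again as a dry run (commands printed, not executed, since they need root and network access; “alice” is an assumed user name). The convenience script at get.docker.com is one common way to install the latest Docker CE release.

```shell
#!/bin/sh
# Dry-run sketch of the Docker installation and verification steps; commands
# are printed, not executed. "alice" stands in for your own user name.
run() { echo "+ $*"; }

run 'curl -fsSL https://get.docker.com -o get-docker.sh'
run sudo sh get-docker.sh              # installs the latest Docker CE

run sudo usermod -aG docker alice      # let a non-root user use Docker
run id alice                           # "docker" should appear in the groups
run sudo reboot                        # for the group change to take effect

# after logging back in:
run docker version
run docker info                        # containers, root dir, runtime, drivers
run docker run hello-world             # pulls a tiny image and runs a container
```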

Launching a Docker Container

  1. Pull the centos6 and latest ubuntu images from Docker Hub (a repository) using the following commands.
    • a. docker pull centos:centos6
    • b. docker pull ubuntu:latest
  2. To list the currently available images, issue ‘docker images’.
  3. To run a centos container using docker, issue ‘docker run centos’
    If you check the output carefully, it says that a new image ‘centos:latest’ was downloaded from the repository. Since the image was not available locally, Docker pulled it and created a container out of it. Both tasks are performed with the single command “docker run”.
  4. Run two containers based on ubuntu and centos6 images which were pulled before. Make sure that Ubuntu container has your name.
  5. Check all the containers created. Are they running? Justify.
  6. Deploy a new centos container which should sleep for 1 minute, and check whether the container is running. Monitor it for a minute by checking its status repeatedly. What did you observe, and why is the container not running?
  7. Run a new “nginx” container but this time the container should be in running state and should be seen in ‘docker ps’.
  8. Deploy an “ubuntu” container with a shell session attached to it. You will notice that you are inside your container and can explore it or start performing tasks specific to that container, such as creating an application or web server. Update your container, then SSH to your Docker host (the Ubuntu desktop machine) and run ‘docker ps’ to see the list of running containers. Issue ‘docker ps -a’ to list the containers which are stopped/exited.
    Once checked all containers, exit out of your “ubuntu” container and return to Docker Host.
  9. To delete a container, use ‘docker rm container-name_OR_ID’. Delete all containers created.
  10. Check all images present on host. Images are deleted by running ‘docker rmi image- name_OR_ID’. Delete all images from the host.
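The whole cycle above can be sketched as follows. It is a dry run (commands printed, not executed), and “alice-ubuntu” is an assumed container name.

```shell
#!/bin/sh
# Dry-run sketch of the pull/run/inspect/clean-up cycle; commands are
# printed, not executed. "alice-ubuntu" stands in for your own name.
run() { echo "+ $*"; }

run docker pull centos:centos6
run docker pull ubuntu:latest
run docker images                           # list locally available images
run docker run --name alice-ubuntu ubuntu   # exits at once: no long-lived process
run docker run -d centos:centos6 sleep 60   # runs ~1 minute, then shows Exited
run docker run -d nginx                     # stays up: nginx keeps running
run docker ps                               # running containers only
run docker ps -a                            # includes stopped/exited containers
run 'docker rm $(docker ps -aq)'            # delete all containers
run 'docker rmi $(docker images -q)'        # delete all images
```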

Creating Image using Dockerfile

A Dockerfile is a text file written in a specific format that Docker can understand. It contains all the commands a user could call on the command line to assemble an image, written in an Instruction and Argument format.

A Dockerfile always begins by defining a base image with FROM, from which the build process starts, followed by various other instructions (commands) and arguments. The build produces a new image which can be used for creating Docker containers.
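As a concrete illustration of the Instruction/Argument format, the sketch below writes a minimal example Dockerfile and displays it. Its contents are illustrative, not the exact file the lab asks for; actually building it would additionally need the Container1 file in the build context and a running Docker daemon.

```shell
#!/bin/sh
# Write a minimal example Dockerfile (illustrative contents) and show it.
mkdir -p demo && cat > demo/Dockerfile <<'EOF'
# Instruction  Argument(s)
FROM busybox:latest
COPY Container1 /opt/source-code/
CMD ["echo", "built from a Dockerfile"]
EOF
cat demo/Dockerfile
```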

  1. Create a Dockerfile with the “busybox” image and name the image ‘busybox:your-name’. Refer to the attached screenshot.
  2. In your home directory, create a folder named “MyDir”, navigate to it, and create 2 blank files named Container1 and Container2. Then create a Dockerfile with the following specifications.
  3. Run a container using an image you created in step #2.
  4. Now check the contents of the “/opt/source-code/“ directory of the container you created in the last step. Do this without getting inside the container.
  5. Write the difference between ENTRYPOINT and CMD instruction.
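For step 5, a small busybox-based example (a sketch, not a file the lab requires) shows the difference: ENTRYPOINT fixes the executable, while CMD supplies default arguments that ‘docker run’ can override.

```dockerfile
FROM busybox:latest
# The container always runs "sleep"; CMD only supplies its default argument.
ENTRYPOINT ["sleep"]
CMD ["5"]
# docker run <image>      -> runs "sleep 5"
# docker run <image> 10   -> runs "sleep 10" (CMD is overridden)
# With CMD alone, "docker run <image> 10" would try to execute "10" itself.
```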

Modifying and Creating your own Image

  1. Get back to your home directory and launch a container based on centos6 image with an interactive terminal attached to it.
  2. Once inside the container’s shell, install and enable the following packages.
    • a. initscripts
    • b. nano
    • c. python
    • d. httpd
    • e. iproute
    • f. sudo
    • g. openssh-server
    • h. telnet
    • i. enable sshd and httpd service on boot by using command ‘chkconfig sshd on’ and ‘chkconfig httpd on’
    • j. start sshd and httpd services.
  3. Once the above packages and services are installed, verify whether the ssh and http services are running.
  4. Exit the container. The container will be stopped.
  5. Now create a new image from this container and name it as centos6:WEB
  6. Launch a new container using the image created in step #5 of this task. You will notice that the new container already has all the packages installed; you just need to enable the services and check their status. Once done, return to the Docker host by exiting the container.
    This is the most important feature of docker which makes it an integral part of CI/CD.
  7. Containers we launch on a Docker host run inside the host's internal network, so the services you enable on them are not available over the external network. To expose a service running inside a container, we have to publish the container's TCP/UDP port on the Docker host. Exit the container you launched in the previous step and relaunch it using the following command: docker run -it --name="test_web" -p 8080:80 centos6:WEB /bin/bash

Once the container is launched, start the httpd service and verify the web page from any browser by accessing your Docker host's IP on port 8080.
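The commit-and-republish flow can be sketched as a dry run (commands printed, not executed; substitute your real container ID, and use the image name from step 5 of this task).

```shell
#!/bin/sh
# Dry-run sketch: snapshot the modified container as an image, then relaunch
# it with container port 80 published on host port 8080. Printed only.
run() { echo "+ $*"; }

run "docker commit <container-id> centos6:WEB"     # new image from the container
run docker run -it --name test_web -p 8080:80 centos6:WEB /bin/bash
# inside the container: service httpd start
run curl http://localhost:8080                     # check from the Docker host
```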

Docker Container Networking

When we install Docker, the installation process automatically creates 3 networks: bridge, none, and host.

  • a. The bridge network offers the easiest solution for creating our own Docker network. It is the default network to which containers get attached upon creation, and it is a private internal network created by Docker on the host. Containers on this network get an IP address from the 172.17.x.x range and can communicate with each other using IP addresses. To access containers on this network from outside, we need to map their ports to the Docker host.
  • b. In the none network, containers are not attached to any network, so they have no access to external networks or to other containers. Containers on this network run in isolation.
  • c. The host network removes any network isolation between the Docker host and the Docker containers. Here we don't need to map container ports to the host, but we cannot run multiple containers on the same host using the same port, because all ports are shared by every container on this network.

You can create multiple networks with Docker and add containers to one or more of them. Any container you create on a particular network is immediately connected to all other containers on the same network, and the network isolates its containers from other (including external) networks. Containers on the same network may communicate with one another via IP addresses.

  1. The ‘docker network’ command is used for managing the networks on your Docker host. List all the networks on the Docker host.
  2. Inspect the previously created “test_web” container and identify the network attached to it.
  3. Run a container named “mustangs” using the alpine image and attach it to the none network and verify it.
  4. Create a new network named “your-name-network” using the bridge driver. Allocate the subnet 182.18.0.1/24 and configure the gateway 182.18.0.1. Verify the bridge network you created.
  5. Run a container using ‘docker run -it --network your-name-network centos:centos6 /bin/bash’ and note down the IP address of the container. Detach from the container but leave it in the running state.
  6. Run another container using the same command as above and check its IP address. See if this container can ping the container created in the previous step, and also ping from the previous container to this one.
  7. Deploy a MySQL database using the “mysql” image and name it “SMU-DB”. Attach it to the network created in step #4 of task E. Set the database password to “db_pass123”; the environment variable to set is MYSQL_ROOT_PASSWORD. Leave this container running for 10 minutes.
  8. Again, check the details of the bridge that you created in step #4 of task E. You should see 3 containers attached to this network.
  9. Delete all containers and images.
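The networking steps above can be sketched as a dry run (commands printed, not executed). “alice-network” is a hypothetical stand-in for “your-name-network”, and the subnet is written here as the network address 182.18.0.0/24.

```shell
#!/bin/sh
# Dry-run sketch of the Docker networking steps; commands are printed,
# not executed. "alice-network" stands in for "your-name-network".
run() { echo "+ $*"; }

run docker network ls                      # the 3 defaults: bridge, host, none
run docker inspect test_web                # look under the "Networks" section
run docker run -d --name mustangs --network none alpine sleep 600
run docker network create --driver bridge \
    --subnet 182.18.0.0/24 --gateway 182.18.0.1 alice-network
run docker run -it --network alice-network centos:centos6 /bin/bash
# detach with Ctrl-p Ctrl-q to leave the container running
run docker run -d --name SMU-DB --network alice-network \
    -e MYSQL_ROOT_PASSWORD=db_pass123 mysql
run docker network inspect alice-network   # should list 3 attached containers
```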

Docker Volumes

  1. Deploy a centos container named DCNE and create a text file containing the text “SMU Rocks” in the /root directory of the container. Exit the container.
    Now start the container again, get inside it, and check whether the data is present.
  2. In step 1 you must have observed that the data remains inside the container even after exiting/stopping and starting it, but it will not be there after the container is deleted. There are times when we want to save the data and use it later. Docker volumes allow us to store data on the Docker host machine and access it even after the container has been deleted. Now create a container named DCNE-volume which will store all the data from the “/root” directory of the container in “/home//volume” on the Docker host. Again, create a text file named “test” containing the text “SMU Rocks” in the /root directory of the container. Then exit the container.
  3. Check whether the data is reflected in “/home//volume” on the host machine.
  4. There are times when containers have to share data among themselves. Create a (shared) Docker volume named “Shared-Volume” and attach it to a new container named ‘DCNE-Shared1’. Use a “centos” image to create it. “DCNE-Shared1” will store all the data from its “/root/-shared” directory in “Shared-Volume”. Once you launch the container, get inside the “/root/-shared” directory and create a file named “DATA” containing your name, SMU ID, and SMU email ID.
  5. Create a new container named ‘DCNE-Shared2’ and attach it to “Shared-Volume”. Again, use a “centos” image. “DCNE-Shared2” will store all the data from its “/root/-shared2” directory in “Shared-Volume”. Once you have launched the container, get inside the “/root/-shared2” directory and create a file named “DATA2” containing the line “This data is being shared with DCNE-Shared1”. Also, can you see the file created by DCNE-Shared1? If yes, paste the output.
  6. Now delete both “DCNE-Shared1” and “DCNE-Shared2” containers.
  7. Create a new container named ““. Attach it to “Shared-Volume”. Use an “ubuntu” image to create it. This container will store all the data from its “/root/“ directory in “Shared-Volume”. Once the container is launched, get inside the “/root/“ directory and check whether any data is present. What did you observe? Justify.
  8. Check “Shared-Volume” on the host machine. Check its content if anything is present.
  9. Delete all images and containers you created till now.
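The volume steps above can be sketched as a dry run (commands printed, not executed). “alice” is a hypothetical stand-in for the stripped name placeholders in the lab's paths.

```shell
#!/bin/sh
# Dry-run sketch of bind mounts and a shared named volume; commands are
# printed, not executed. "alice" stands in for your own name.
run() { echo "+ $*"; }

# bind mount: the container's /root is backed by a directory on the host
run docker run -it --name DCNE-volume \
    -v /home/alice/volume:/root centos /bin/bash

# named volume shared by two containers
run docker volume create Shared-Volume
run docker run -it --name DCNE-Shared1 \
    -v Shared-Volume:/root/alice-shared centos /bin/bash
run docker run -it --name DCNE-Shared2 \
    -v Shared-Volume:/root/alice-shared2 centos /bin/bash
run docker volume inspect Shared-Volume   # "Mountpoint" shows the host-side path
```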

Docker Registry

Docker Registry is a central repository for Docker images. While pulling images or creating containers in all the tasks above, we haven't specified the path from which the images should be pulled, so Docker assumed its default repository, Docker Hub, and pulled them from there. If we want an image to be pulled from any other repository or account, we can specify that in the image path.
We can create our own private repository/registry as well. And use it to run a container, pull/push image, etc.
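The full round trip through a private registry can be sketched as a dry run (commands printed, not executed), using Docker's official registry:2 image.

```shell
#!/bin/sh
# Dry-run sketch of the private-registry workflow; commands are printed,
# not executed.
run() { echo "+ $*"; }

run docker run -d -p 5000:5000 --name registry registry:2
run docker build -t smu .                     # from your Dockerfile directory
run docker tag smu localhost:5000/smu         # tag with the registry URL
run docker push localhost:5000/smu            # push to the private registry
run docker rmi centos smu localhost:5000/smu  # drop local copies only
run docker pull localhost:5000/smu            # fetch it back from the registry
run docker run localhost:5000/smu
```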

  1. Create your own private registry, exposing its API on port 5000 of the Docker host. Run it in detached mode.
  2. Create a directory named ““ and then create your own image having a simple instruction for base image (centos) and display “Hello,“ message. Image should have a name - “smu”.
  3. Tag newly created image with your private registry URL and then push it to your (local) private registry.
  4. Remove the locally cached centos and localhost:5000/smu images, so that you can test pulling the image from your registry. This does not remove the localhost:5000/smu image from your registry.
    After this step only your private registry container and its associated image should be there.
  5. Now pull your image from your local registry.
  6. Now create a container using the image you just pulled from your private registry.