Projects

Multi-User Docker-based Server Management

A streamlined procedure for managing per-user environments as Docker containers, built from a shared base image, on a shared server for a small team.

Project Overview

This project provisions isolated, containerized Linux environments (Ubuntu 20.04) for multiple users on a shared host server (CentOS 7). It dynamically maps host users to container users using UID/GID matching, ensuring smooth read/write permissions for shared volumes. Each user gets their own dedicated SSH access, a pre-configured Miniconda environment, and a mapped workspace.

Key Features

  • Dynamic User Mapping: Sidesteps Docker's root-ownership problem on bind mounts. On startup, the container creates a user whose UID and GID match the host user's.
  • Isolated Environments: Users access their specific containers via unique SSH ports.
  • Persistent Storage: The host user's home directory is bind-mounted to the same path (/home/<username>) inside the container, so files persist across container rebuilds.
  • Legacy Kernel Compatibility: Configured to build successfully on CentOS 7 hosts using unauthenticated APT mirrors (Tsinghua) to bypass outdated certificate issues.

What we attempted to solve

Given the outdated nature of CentOS 7, we were unable to install much modern software, including Conda, directly on the server. Multiple users also needed the same tools, but there was no per-user environment management in place. Our solution was to build a single Docker base image and launch an individual Ubuntu container from it for each user, which they access directly via SSH.

Prerequisites

  • Host OS: CentOS 7 (or similar Linux distribution)
  • Docker installed and running
  • Root (sudo) privileges on the host
  • Host users must already exist (e.g., sudo useradd -m username)
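The prerequisites above can be sanity-checked with a short host-side script before deploying. This is a minimal sketch; "alice" is a hypothetical username to substitute:

```shell
# Hypothetical pre-flight check; replace "alice" with the real username.
USERNAME=alice
if command -v docker >/dev/null 2>&1; then
    echo "docker: found"
else
    echo "docker: NOT installed"
fi
if id "$USERNAME" >/dev/null 2>&1; then
    echo "host user $USERNAME: exists"
else
    echo "host user $USERNAME: missing (create with: sudo useradd -m $USERNAME)"
fi
```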

Architecture & File Structure

The project relies on three core files:

  1. Dockerfile: Defines the base image (Ubuntu 20.04), installs core dependencies, configures Miniconda, and sets up SSH.
  2. entrypoint.sh: Injected into the container. It runs on startup to create the user, map UID/GID, fix Conda permissions, and start the SSH daemon as Process 1.
  3. start_lab.sh: The host-side deployment script used by the administrator to spin up new environments.
Ubuntu 20.04 was used here because Ubuntu 22.04 images failed to run on the CentOS 7 host (its kernel and Docker version are too old for the newer base image).

Deployment Workflow

1. Prepare the Build Directory

Place Dockerfile, entrypoint.sh, and start_lab.sh in the same directory on the host server.

Ensure the scripts are executable:

chmod +x start_lab.sh

2. Build the Base Image

Run this command from the directory containing your Dockerfile:

docker build -t lab_base_image .

3. Deploy a User Container

Use the start_lab.sh script to spawn a container for an existing host user. Provide the username and the designated SSH port.

sudo ./start_lab.sh <username> <port>
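For several users at once, the same call can be scripted. A minimal dry-run sketch with hypothetical users and ports (it only prints the commands; drop the echo to execute them):

```shell
# Hypothetical roster: one "user:port" pair per entry.
for pair in "alice:2001" "bob:2002"; do
    user=${pair%%:*}
    port=${pair##*:}
    echo "sudo ./start_lab.sh $user $port"   # dry run: prints, does not deploy
done
```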

4. Configure the Host Firewall

CentOS 7 uses firewalld by default. You must open the assigned port (2001 in this example) so the user can connect:

sudo firewall-cmd --zone=public --add-port=2001/tcp --permanent
sudo firewall-cmd --reload
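To confirm the change took effect, firewalld can list the currently open ports. A small host-side sketch that degrades gracefully if firewalld is absent:

```shell
# List the ports currently open in the public zone (run on the host).
if command -v firewall-cmd >/dev/null 2>&1; then
    STATUS=$(sudo firewall-cmd --zone=public --list-ports)
else
    STATUS="firewalld not present on this machine"
fi
echo "$STATUS"
```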

5. Access the Lab

The user can now SSH into their isolated environment from their local machine:

ssh <username>@<host_ip_address> -p <port>
# Default password: password123 (users should change it with `passwd` after first login)
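For convenience, users may add a host alias to ~/.ssh/config on their local machine (hypothetical placeholder values):

```
# ~/.ssh/config on the user's local machine
Host lab
    HostName <host_ip_address>
    User <username>
    Port <port>
```

After that, `ssh lab` suffices.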

Files

entrypoint.sh

This script performs the UID/GID matching so the container user can read and write the mounted files. Note that the default password is set to password123.

entrypoint.sh
#!/bin/bash

# Default values if variables aren't passed
USER_UID=${HOST_UID:-1000}
USER_GID=${HOST_GID:-1000}
USER_NAME=${USERNAME:-labuser}

echo "Configuring container for $USER_NAME (UID: $USER_UID, GID: $USER_GID)"

# 1. Create the group and user to match the host
groupadd -g "$USER_GID" "$USER_NAME" 2>/dev/null || echo "Group exists"
useradd -m -u "$USER_UID" -g "$USER_GID" -s /bin/bash "$USER_NAME" 2>/dev/null || echo "User exists"

# 2. Set a default password and give sudo rights (guard against appending
#    a duplicate sudoers line every time the container restarts)
echo "$USER_NAME:password123" | chpasswd
grep -q "^$USER_NAME " /etc/sudoers || echo "$USER_NAME ALL=(ALL) NOPASSWD:ALL" >> /etc/sudoers

# 3. Fix permissions for Conda so the user can manage their own environments
chown -R "$USER_NAME:$USER_GID" /opt/conda

# 4. Initialize Conda for the new user
sudo -u "$USER_NAME" /opt/conda/bin/conda init bash

# 5. Start the SSH service in the foreground
exec /usr/sbin/sshd -D
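Once connected, the UID/GID mapping can be verified from inside the container with a couple of commands (a quick manual check, not part of the entrypoint itself):

```shell
# Run inside the container after SSH'ing in:
id                                  # uid/gid should equal the host user's values
touch "$HOME/.mapping_test" && rm "$HOME/.mapping_test" \
  && echo "home directory is writable"
```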

Dockerfile

Dockerfile
FROM ubuntu:20.04

# Setup APT and Mirrors
RUN rm -rf /etc/apt/apt.conf.d/* && \
    echo 'APT::Get::AllowUnauthenticated "true";' > /etc/apt/apt.conf.d/99force-insecure
RUN echo "deb [trusted=yes] http://mirrors.tuna.tsinghua.edu.cn/ubuntu/ focal main restricted universe multiverse" > /etc/apt/sources.list

# Install Core Tools and SSH
RUN apt-get update && \
    DEBIAN_FRONTEND=noninteractive apt-get install -y --allow-unauthenticated \
    wget ca-certificates bzip2 openssh-server sudo && \
    mkdir /var/run/sshd && \
    apt-get clean

# Install Miniconda
RUN wget --no-check-certificate https://mirrors.tuna.tsinghua.edu.cn/anaconda/miniconda/Miniconda3-py39_4.12.0-Linux-x86_64.sh -O /tmp/miniconda.sh && \
    bash /tmp/miniconda.sh -b -p /opt/conda && \
    rm /tmp/miniconda.sh

ENV PATH="/opt/conda/bin:$PATH"

# Copy the entrypoint script from your host into the image
COPY entrypoint.sh /usr/local/bin/entrypoint.sh
RUN chmod +x /usr/local/bin/entrypoint.sh

EXPOSE 22

# Tell Docker to run our script on startup
ENTRYPOINT ["/usr/local/bin/entrypoint.sh"]
Note that the Tsinghua mirror is used, with unauthenticated APT, so the build succeeds despite the CentOS 7 host's outdated CA certificates.
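On hosts with working certificates and unrestricted network access, the mirror line could presumably be swapped for the official archive (an untested alternative to the Tsinghua mirror):

```
# Alternative: official Ubuntu archive instead of the Tsinghua mirror
RUN echo "deb http://archive.ubuntu.com/ubuntu/ focal main restricted universe multiverse" > /etc/apt/sources.list
```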

start_lab.sh

Note that the host user's UID and GID are fetched so the container user can be created with matching IDs.

start_lab.sh
#!/bin/bash

# Usage: ./start_lab.sh <host_username> <external_port>
TARGET_USER=$1
PORT=$2

if [ -z "$TARGET_USER" ] || [ -z "$PORT" ]; then
    echo "Usage: ./start_lab.sh <username> <port>"
    exit 1
fi

# Get IDs from the host system (fail fast if the host user doesn't exist)
if ! id "$TARGET_USER" >/dev/null 2>&1; then
    echo "Host user $TARGET_USER does not exist. Create it first: sudo useradd -m $TARGET_USER"
    exit 1
fi
U_ID=$(id -u "$TARGET_USER")
G_ID=$(id -g "$TARGET_USER")
U_HOME="/home/$TARGET_USER"

# Ensure the host directory exists
mkdir -p "$U_HOME"

# Change ownership to the target user 
# (This assumes the user exists on the CentOS host with the same name)
chown -R "$TARGET_USER:$TARGET_USER" "$U_HOME"

# Build/Refresh the image (optional, but ensures entrypoint updates are live)
# docker build -t lab_base_image .

docker run -d \
  --name "lab_$TARGET_USER" \
  --restart unless-stopped \
  -e HOST_UID=$U_ID \
  -e HOST_GID=$G_ID \
  -e USERNAME=$TARGET_USER \
  -p "$PORT:22" \
  -v "$U_HOME:/home/$TARGET_USER:z" \
  lab_base_image

echo "----------------------------------------------------"
echo "Container for $TARGET_USER is now LIVE."
echo "Access via: ssh $TARGET_USER@$(hostname -I | awk '{print $1}') -p $PORT"
echo "Work directory mapped to: $U_HOME"
echo "----------------------------------------------------"
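Containers created this way can be managed with standard Docker commands. A sketch for a hypothetical container name lab_alice; the `|| true` guards keep the snippet from aborting if the container doesn't exist:

```shell
# Day-to-day management for one user's container (hypothetical name "lab_alice").
NAME="lab_alice"
if command -v docker >/dev/null 2>&1; then
    docker stop "$NAME"  || true   # pause the environment
    docker start "$NAME" || true   # resume it; /home data persists on the host
    docker rm -f "$NAME" || true   # delete it; re-run start_lab.sh to recreate
else
    echo "docker not available on this machine"
fi
```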

Troubleshooting

  • "Command Not Found" when running scripts: Ensure you are using ./ to execute scripts in the current directory (e.g., ./start_lab.sh) and that the file has execution permissions (chmod +x).
  • Connection Refused on SSH:
    1. Verify the container is running: docker ps
    2. Verify the SSH daemon is running inside the container: docker top <container_name> (look for sshd). Note that netstat is not installed in the base image.
    3. Ensure the host firewall has the specific port opened (see step 4 above).

Disclaimer

This project was written under AI assistance.

This code is provided "as-is" without any warranty. Users are solely responsible for validating results for their specific applications. The author(s) are not liable for any errors, inaccuracies, or damages arising from the use of this software.

For critical deployments, please independently verify all configurations and follow established security practices for multi-user systems.