Configuring a Sharded Cluster in MongoDB
Part 1: Initial Server Setup & Tuning
Before deploying MongoDB, it is critical to tune the Linux operating system for optimal performance and stability. This guide covers the essential OS-level configurations for Ubuntu.
Step 1: Initial Login & User Creation
First, log in as root and create a dedicated user for MongoDB.
# 1. Log in to your server
ssh root@your_server_ip

# 2. Create the mongouser
adduser mongouser
# Set password when prompted
Step 2: Add Hostnames
Ensure all nodes in your cluster can resolve each other by name. Edit /etc/hosts on every node, and include both the dbN names and the mongoN aliases used later when initiating the replica sets.
sudo -i
# Edit /etc/hosts
nano /etc/hosts
# Add cluster node IPs:
192.168.0.111 db1 mongo1
192.168.0.112 db2 mongo2
192.168.0.113 db3 mongo3
Step 3: Increase OS Limits
MongoDB requires high limits for open files (`nofile`) and processes (`nproc`).
# 1. Edit main limits config
sudo nano /etc/security/limits.conf
# Add these lines to the end:
*    soft    nofile    64000
*    hard    nofile    64000
*    soft    nproc     32000
*    hard    nproc     32000

# 2. Edit nproc limit config
sudo nano /etc/security/limits.d/90-nproc.conf
# Add/Update these lines:
*    soft    nproc     32000
*    hard    nproc     32000
Step 4: Disable Transparent Huge Pages (THP)
THP can degrade database performance. We will disable it permanently using an init script.
# Create the script
sudo nano /etc/init.d/disable-transparent-hugepages
# Paste this content:
#!/bin/sh
### BEGIN INIT INFO
# Provides: disable-transparent-hugepages
# Required-Start: $local_fs
# Required-Stop:
# X-Start-Before: mongod mongodb-mms-automation-agent
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: Disable Linux transparent huge pages
# Description: Disable Linux transparent huge pages, to improve
# database performance.
### END INIT INFO
case $1 in
start)
if [ -d /sys/kernel/mm/transparent_hugepage ]; then
thp_path=/sys/kernel/mm/transparent_hugepage
elif [ -d /sys/kernel/mm/redhat_transparent_hugepage ]; then
thp_path=/sys/kernel/mm/redhat_transparent_hugepage
else
return 0
fi
echo 'never' > ${thp_path}/enabled
echo 'never' > ${thp_path}/defrag
unset thp_path
;;
esac

Make the script executable and enable it on boot:
# Make executable
sudo chmod 755 /etc/init.d/disable-transparent-hugepages
# Register start script
sudo update-rc.d disable-transparent-hugepages defaults
Step 5: Turn Off Core Dumps
Disable core dumps in the apport configuration.
sudo nano /etc/default/apport
# Change enabled=1 to 0
enabled=0
Step 6: Configure Filesystem (EBS/NVMe)
Prepare your data volume. We recommend XFS for MongoDB.
# 1. Identify the volume
sudo lsblk
# (Assume device is /dev/nvme1n1)

# 2. Format as XFS (skip this step if the volume is already formatted)
sudo mkfs.xfs /dev/nvme1n1

# 3. Create mount point
sudo mkdir -p /mongodata

# 4. Mount volume
sudo mount /dev/nvme1n1 /mongodata

# 5. Get UUID for persistent mounting
sudo blkid /dev/nvme1n1
# Copy the UUID, e.g., "8f64a5ad-bd72-44d0-86b1-d00ade8ce958"

# 6. Edit fstab
sudo nano /etc/fstab
# Add this line at the end:
UUID=8f64a5ad-bd72-44d0-86b1-d00ade8ce958 /mongodata xfs defaults,noatime,discard 0 0

# 7. Verify mount
sudo mount -a

# 8. Set permissions
sudo chown -R mongouser:mongouser /mongodata
Step 7: Configure Read Ahead
Set the read-ahead value to 32 sectors (16 KB) for optimal random I/O performance.
# Check current value
sudo blockdev --getra /dev/nvme1n1

# If the value is not 32, set it:
sudo /sbin/blockdev --setra 32 /dev/nvme1n1

# Configure cron for reboot persistence
sudo crontab -e
# Add line:
@reboot /sbin/blockdev --setra 32 /dev/nvme1n1
Step 8: Reboot and Verify
Reboot the server and verify all configurations.
sudo reboot

# --- Verification ---

# 1. Check limits (should be 32000 / 64000)
ulimit -u
ulimit -n

# 2. Check THP (should show [never])
cat /sys/kernel/mm/transparent_hugepage/enabled
cat /sys/kernel/mm/transparent_hugepage/defrag

# 3. Check filesystem (should show noatime)
cat /proc/mounts | grep noatime

# 4. Check read ahead (should be 32)
sudo blockdev --getra /dev/nvme1n1
Part 2: Software Installation & Configuration
With the operating system tuned, we can now proceed to install the MongoDB server software and the MongoDB Shell (mongosh) on all nodes in the cluster.
Step 1: Install Required OS Packages
Install the necessary dependencies for MongoDB 8.0 on Ubuntu.
sudo apt-get update
sudo apt-get install -y libcurl4 libgssapi-krb5-2 libldap-2.5-0 libwrap0 libsasl2-2 \
  libsasl2-modules libsasl2-modules-gssapi-mit openssl liblzma5
Step 2: Download MongoDB and Mongosh
Download the MongoDB Server and Shell binaries for your architecture (ARM64 or x86_64).
# For ARM64 (Ubuntu 24.04)
wget https://fastdl.mongodb.org/linux/mongodb-linux-aarch64-ubuntu2404-8.0.4.tgz
wget https://downloads.mongodb.com/compass/mongosh-2.3.4-linux-arm64.tgz

# For x86_64 (Ubuntu 22.04)
# wget https://fastdl.mongodb.org/linux/mongodb-linux-x86_64-ubuntu2204-8.0.4.tgz
# wget https://downloads.mongodb.com/compass/mongosh-2.3.4-linux-x64.tgz
Step 3: Install Software & Create MongoDB User
Extract the binaries and set up the directory structure in /opt.
# 1. Prepare /opt directory
sudo mkdir -p /opt
cd /opt

# 2. Extract files
sudo tar -xvzf ~/mongodb-linux-aarch64-ubuntu2404-8.0.4.tgz
sudo tar -xvzf ~/mongosh-2.3.4-linux-arm64.tgz

# 3. Create symlinks for easier management
sudo ln -s mongodb-linux-aarch64-ubuntu2404-8.0.4 mongodb
sudo ln -s mongosh-2.3.4-linux-arm64 mongosh

# 4. Create necessary directories
sudo mkdir -p /opt/mongodb/config
sudo mkdir -p /opt/mongodb/log_mongos
sudo mkdir -p /opt/mongodb/log_shard_svr
sudo mkdir -p /opt/mongodb/log_cfg_svr
sudo mkdir -p /opt/mongodb/data_cfg_svr
sudo mkdir -p /opt/mongodb/data_shard_svr

# 5. Set up the mongouser (if not already created in Part 1)
# sudo adduser mongouser

# 6. Set permissions
sudo chown -R mongouser:mongouser /opt/mongodb-linux-aarch64-ubuntu2404-8.0.4
sudo chown -R mongouser:mongouser /opt/mongosh-2.3.4-linux-arm64
sudo chown -R mongouser:mongouser /opt/mongodb
sudo chown -R mongouser:mongouser /opt/mongosh
Update the mongouser profile to include MongoDB binaries in the PATH.
# Switch to mongouser
su - mongouser

# Edit .bashrc
nano ~/.bashrc
# Add these lines at the end:
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/mongodb/bin
export PATH=$PATH:/opt/mongodb/bin:/opt/mongosh/bin

# Apply changes
source ~/.bashrc

# Verify installation
mongod --version
mongosh --version
Step 4: Copy Config & Script Files
Modify the bindIp settings in your configuration files and distribute them to all nodes.
# Copy cfg_svr.conf and shard_svr.conf to /opt/mongodb/config
# Ensure bindIp is set to the node's internal IP

# Copy the scripts folder to mongouser's home directory
# These scripts will be used for cluster initialization
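This guide does not reproduce cfg_svr.conf and shard_svr.conf themselves. The sketch below shows what they might contain, assuming the directory layout from Step 3, the ports used later in this guide (37017 for the config server, 27017 for the shard server), and the replica set names rSetCFG and rSetSVR; the bindIp values are placeholders and must be set to each node's internal IP.
# Minimal sketch of cfg_svr.conf (paths and bindIp are assumptions; adapt per node)
cat <<'EOF' | sudo tee /opt/mongodb/config/cfg_svr.conf
systemLog:
  destination: file
  path: /opt/mongodb/log_cfg_svr/cfg_svr.log
  logAppend: true
storage:
  dbPath: /opt/mongodb/data_cfg_svr
net:
  port: 37017
  bindIp: 127.0.0.1,192.168.0.111   # this node's internal IP
replication:
  replSetName: rSetCFG
sharding:
  clusterRole: configsvr
EOF

# Minimal sketch of shard_svr.conf
cat <<'EOF' | sudo tee /opt/mongodb/config/shard_svr.conf
systemLog:
  destination: file
  path: /opt/mongodb/log_shard_svr/shard_svr.log
  logAppend: true
storage:
  dbPath: /opt/mongodb/data_shard_svr
net:
  port: 27017
  bindIp: 127.0.0.1,192.168.0.111   # this node's internal IP
replication:
  replSetName: rSetSVR
sharding:
  clusterRole: shardsvr
EOF

# Keep ownership consistent with the rest of /opt/mongodb
sudo chown mongouser:mongouser /opt/mongodb/config/*.conf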
Step 5: Configure Mongos (Query Router)
The mongos router can be started on application systems or dedicated nodes. For optimal performance, limit the number of active mongos instances to 10-15 per cluster.
# Copy mongos.conf to /opt/mongodb/config
# This file defines the connection to the Config Server replica set
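mongos.conf is likewise not shown in this guide. A minimal sketch, assuming the 47017 port used in Part 3 and the rSetCFG config server replica set, might look like this:
cat <<'EOF' | sudo tee /opt/mongodb/config/mongos.conf
systemLog:
  destination: file
  path: /opt/mongodb/log_mongos/mongos.log
  logAppend: true
net:
  port: 47017
  bindIp: 127.0.0.1,192.168.0.111   # this node's internal IP
sharding:
  configDB: rSetCFG/mongo1:37017,mongo2:37017,mongo3:37017
EOF
sudo chown mongouser:mongouser /opt/mongodb/config/mongos.conf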
Part 3: Cluster Initialization & Automation
Cluster IP Assignments
- 192.168.0.111 - db1 (mongo1)
- 192.168.0.112 - db2 (mongo2)
- 192.168.0.113 - db3 (mongo3)
- 192.168.0.114 - db4 (Failover only)
Step 1: Start Config and Shard Servers
Log in as mongouser on all nodes (db1 to db3) and start the services using the provided scripts.
# 1. Start Config Servers (Port 37017)
cd ~/scripts
chmod +x ./start_cfg_svr.sh
./start_cfg_svr.sh

# 2. Start Shard Servers (Port 27017)
chmod +x ./start_shard_svr.sh
./start_shard_svr.sh
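The start_cfg_svr.sh and start_shard_svr.sh scripts come from the scripts folder copied in Part 2 and are not reproduced here. If you need to recreate them, a minimal sketch under this guide's layout is simply a wrapper that launches mongod with the matching config file; note that --fork backgrounds the process and requires systemLog to point at a file, as in the config sketches above.
#!/bin/bash
# start_cfg_svr.sh (sketch): start the config server mongod in the background
/opt/mongodb/bin/mongod --config /opt/mongodb/config/cfg_svr.conf --fork

#!/bin/bash
# start_shard_svr.sh (sketch): start the shard server mongod in the background
/opt/mongodb/bin/mongod --config /opt/mongodb/config/shard_svr.conf --fork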
Step 2: Initialize Config Server Replication
From db1, connect to the config server and initiate the replica set.
mongosh --host mongo1 --port 37017
rs.initiate({
_id: "rSetCFG",
configsvr: true,
members: [
{ _id: 0, host: "mongo1:37017" },
{ _id: 1, host: "mongo2:37017" },
{ _id: 2, host: "mongo3:37017" }
]
})
rs.status()

Step 3: Initialize Shard Server Replication
Similarly, initiate the shard replica set from db1.
mongosh --host mongo1 --port 27017
rs.initiate({
_id: "rSetSVR",
members: [
{ _id: 0, host: "mongo1:27017" },
{ _id: 1, host: "mongo2:27017" },
{ _id: 2, host: "mongo3:27017" }
]
})
rs.status()

Step 4: Configure Mongos Router
Start the mongos process and add the shard replica set to the cluster.
# 1. Start Mongos
mongos --config /opt/mongodb/config/mongos.conf
# 2. Connect to mongos (port 47017, as configured in mongos.conf)
mongosh --port 47017
# 3. Add the shard
sh.addShard("rSetSVR/mongo1:27017,mongo2:27017,mongo3:27017")
# 4. Verify status
sh.status()

Step 5: Verification & Testing
Create a test database and collection through the mongos shell.
use emp
db.emp.insertOne({ empno: 101, name: "Mukesh" })
db.emp.insertOne({ empno: 102, name: "Barapatre" })
db.emp.find()
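The inserts above only confirm that reads and writes flow through mongos. If you also want to confirm that collections can actually be sharded through this router, a minimal, hypothetical follow-up in the same mongosh session looks like this; the hashed shard key on empno is an assumption, not part of the original test.
sh.enableSharding("emp")
sh.shardCollection("emp.emp", { empno: "hashed" })
sh.status()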
Step 6: Production Setup (External Volumes)
For production, ensure data is stored on external volumes with strict permissions.
# 1. Create directories
sudo mkdir -p /mongodata/configserver/{db,log,log_cfg_svr}
sudo mkdir -p /mongodata/shard/{db,log,log_shard_svr}
sudo mkdir -p /mongodata/mongos/log
# 2. Set permissions
sudo chown -R mongouser:mongouser /mongodata
sudo chmod -R 750 /mongodata
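Once these directories exist, the configuration files from Part 2 need to point at the external volume instead of /opt/mongodb. The option names below are standard mongod/mongos settings, but the exact paths and filenames are assumptions based on this guide's layout:
# cfg_svr.conf:   storage.dbPath -> /mongodata/configserver/db
#                 systemLog.path -> /mongodata/configserver/log_cfg_svr/cfg_svr.log
# shard_svr.conf: storage.dbPath -> /mongodata/shard/db
#                 systemLog.path -> /mongodata/shard/log_shard_svr/shard_svr.log
# mongos.conf:    systemLog.path -> /mongodata/mongos/log/mongos.log
# Restart the services after editing so the new paths take effect.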
Step 7: Automating with Systemd
Create systemd service files for each component to ensure they start automatically on boot.
Config Server Service
# /etc/systemd/system/mongoconfigserver.service
[Unit]
Description=MongoDB Config Server
After=network.target
[Service]
User=mongouser
Group=mongouser
ExecStart=/opt/mongodb/bin/mongod --config /opt/mongodb/config/cfg_svr.conf
ExecStop=/opt/mongodb/bin/mongod --shutdown --config /opt/mongodb/config/cfg_svr.conf
Restart=always
TimeoutStopSec=60
[Install]
WantedBy=multi-user.target
Shard Server Service
# /etc/systemd/system/mongoshardserver.service
[Unit]
Description=MongoDB Shard Server
After=network.target
[Service]
User=mongouser
Group=mongouser
ExecStart=/opt/mongodb/bin/mongod --config /opt/mongodb/config/shard_svr.conf
ExecStop=/opt/mongodb/bin/mongod --shutdown --config /opt/mongodb/config/shard_svr.conf
Restart=always
TimeoutStopSec=60
PIDFile=/mongodata/shard/mongod.lock
[Install]
WantedBy=multi-user.target
Mongos Router Service
# /etc/systemd/system/mongosrouter.service
[Unit]
Description=MongoDB Mongos Router
After=network.target
[Service]
User=mongouser
Group=mongouser
ExecStart=/opt/mongodb/bin/mongos --config /opt/mongodb/config/mongos.conf
ExecStop=/opt/mongosh/bin/mongosh --port 47017 --eval "db.getSiblingDB('admin').shutdownServer()"
Restart=always
LimitNOFILE=64000
TimeoutStopSec=60
PIDFile=/mongodata/mongos/log/mongos.pid
[Install]
WantedBy=multi-user.target

Step 8: Service Management
Use standard systemctl commands to manage your cluster services.
# Reload daemon after changes
sudo systemctl daemon-reload

# Start and enable
sudo systemctl start mongoconfigserver mongoshardserver mongosrouter
sudo systemctl enable mongoconfigserver mongoshardserver mongosrouter

# Check status
sudo systemctl status mongoconfigserver mongoshardserver mongosrouter

# View logs
journalctl -u mongoconfigserver -f
Important Lessons
- Always ensure the firewall (UFW) is configured or disabled for cluster communication: sudo ufw disable (or allow the cluster ports explicitly, as in the sketch below).
- Ownership is critical: sudo chown -R mongouser:mongouser /mongodata.
- Only create databases and collections through the mongos shell to ensure metadata is correctly stored in the config servers.
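If you would rather keep UFW enabled than disable it, a minimal sketch is to allow the three ports used in this guide from the cluster's subnet; the 192.168.0.0/24 range is an assumption based on the node IPs listed in Part 3.
# Allow shard (27017), config server (37017), and mongos (47017) traffic within the cluster subnet
sudo ufw allow from 192.168.0.0/24 to any port 27017 proto tcp
sudo ufw allow from 192.168.0.0/24 to any port 37017 proto tcp
sudo ufw allow from 192.168.0.0/24 to any port 47017 proto tcp
sudo ufw enable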

