Thursday, December 1, 2022
Building Docker Images with Dockerfiles
What is a Dockerfile?
- A Dockerfile is a text configuration file written using a special syntax
- It describes step-by-step instructions of all the commands you need to run to assemble a Docker Image.
- The docker build command processes this file, generating a Docker Image in your Local Image Cache, which you can then start up using the docker run command, or push to a permanent Image Repository.
Create a Dockerfile
Creating a Dockerfile is as easy as creating a new file named "Dockerfile" with your text editor of choice and defining some instructions. The name of the file does not really matter: Dockerfile is the default name, but you can use any filename that you want (and even have multiple Dockerfiles in the same folder).
Simple Dockerfile for NGINX
```dockerfile
#
# Each instruction in this file generates a new layer that gets pushed to your local image cache
#
# Lines preceded by # are regarded as comments and ignored
#
# The line below states we will base our new image on the latest official Ubuntu
FROM ubuntu:latest
#
# Identify the maintainer of an image
LABEL maintainer="myname@somecompany.com"
#
# Update the image to the latest packages
RUN apt-get update && apt-get upgrade -y
#
# Install NGINX to test
RUN apt-get install nginx -y
#
# Expose port 80
EXPOSE 80
#
# Last is the actual command to start up NGINX within our Container
CMD ["nginx", "-g", "daemon off;"]
```
Dockerfile Commands
- ADD – Defines files to copy from the Host file system onto the Container
- ADD ./local/config.file /etc/service/config.file
- CMD – This is the command that will run when the Container starts
- CMD ["nginx", "-g", "daemon off;"]
- ENTRYPOINT – Sets the default application used every time a Container is created from the Image. If used in conjunction with CMD, you can remove the application and just define the arguments there
- CMD Hello World!
- ENTRYPOINT echo
- ENV – Set/modify the environment variables within Containers created from the Image.
- ENV VERSION 1.0
- EXPOSE – Define which Container ports to expose
- EXPOSE 80
- FROM – Select the base image to build the new image on top of
- FROM ubuntu:latest
- LABEL maintainer – Optional field to let you identify yourself as the maintainer of this image. This is just a label (it used to be a dedicated Docker directive).
- LABEL maintainer="someone@xyz.xyz"
- RUN – Specify commands to make changes to your Image and subsequently the Containers started from this Image. This includes updating packages, installing software, adding users, creating an initial database, setting up certificates, etc. These are the commands you would run at the command line to install and configure your application. This is one of the most important Dockerfile directives.
- RUN apt-get update && apt-get upgrade -y && apt-get install -y nginx && rm -rf /var/lib/apt/lists/*
- USER – Define the default user all commands will be run as within any Container created from your Image. It can be either a UID or username
- USER docker
- VOLUME – Creates a mount point within the Container linking it back to file systems accessible by the Docker Host. New Volumes get populated with the pre-existing contents of the specified location in the image. It is worth mentioning that defining Volumes in a Dockerfile can lead to issues; Volumes should be managed with docker-compose or "docker run" commands. Volumes are optional: if your application does not have any state (and most web applications work like this), then you don't need to use volumes.
- VOLUME /var/log
- WORKDIR – Define the default working directory for the command defined in the “ENTRYPOINT” or “CMD” instructions
- WORKDIR /home
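To make the ENTRYPOINT/CMD interplay described above concrete, here is a minimal sketch (a hypothetical image, using the exec form of both directives): ENTRYPOINT fixes the application, CMD supplies its default arguments, and arguments given to docker run replace only the CMD part.

```dockerfile
FROM ubuntu:latest
# The application is fixed by ENTRYPOINT...
ENTRYPOINT ["echo"]
# ...while CMD only provides the default arguments passed to it
CMD ["Hello World!"]
```

Running the resulting image with no arguments prints "Hello World!"; running it as docker run <image ID> Goodbye overrides only the CMD arguments and prints "Goodbye" instead.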
Building and Testing Dockerfiles
There's a free service that lets you quickly spin up Docker instances through a web interface, called "Play With Docker":
1. First of all, head over to http://play-with-docker.com and start a new session. You need to create an account first.
2. Once your session is active, click on "Add New Instance".
3. A new instance will start with a Docker Engine ready to accept commands.
4. Next, create/edit the Dockerfile. Run "vi Dockerfile", press "i" to switch to Insert Mode, copy/paste the contents of our Dockerfile, press "Esc" to exit Insert Mode, and save+exit by typing ":x".
5. Build the new image using the command docker build <path>. Path refers to the directory containing the Dockerfile.
6. At the end of the process you should see the message "Successfully built <image ID>".
7. Start the new image and test connectivity to NGINX. Run the command docker run -p 80:80 <image ID>. The option -p 80:80 exposes Container port 80 as Host port 80 to the world.
8. As a result, a port 80 link should have become active next to the IP. Click on it to access your NGINX service.
Building Docker images for your own applications
In the previous section we have seen an example Docker image for nginx. But what if you want to package your own application in a Docker image?
In this case you can create a Dockerfile in the same folder as your source code. Then put instructions in the dockerfile that mirror what you do locally on your workstation to compile/package the code.
The first step should be to find a public Docker image that uses your programming language, for example the official python or node images on Docker Hub.
Once you find a proper base image you can use it to package your own application. Here is an example for Python
```dockerfile
FROM python:3.6.4-alpine3.6
ENV FLASK_APP=minitwit
COPY . /app
WORKDIR /app
RUN pip install --editable .
RUN flask initdb
EXPOSE 5000
CMD [ "flask", "run", "--host=0.0.0.0" ]
```
Here is another example for Node.js
```dockerfile
FROM node:10
WORKDIR /usr/src/app
# Install app dependencies
COPY package*.json ./
RUN npm install
# Bundle app source
COPY . .
EXPOSE 8080
CMD [ "node", "server.js" ]
```
How to create an optimized Docker image from your dockerfile
Once you become familiar with building docker images you also need to pay attention to two more topics
- Creating docker images with the smallest file size possible
- Using multi-stage builds in order to package only what is actually needed
For the first subject be sure to check out our Docker layer tutorial. For multi-stage builds see also our dedicated tutorial.
If you want to know all the best practices about creating and using Dockerfiles in your team, see our in-depth Docker best practices guide.
Here is an example with a Node application that is using multi-stage builds:
```dockerfile
FROM node:8.16 as build-deps
WORKDIR /usr/src/app
COPY package.json yarn.lock ./
RUN yarn
COPY . ./
RUN yarn build

FROM nginx:1.12-alpine
COPY --from=build-deps /usr/src/app/build /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
```
Go have fun building your own Images!
For more examples of Dockerfile templates, log in to Codefresh (it's free), click Add Repository, and check out the many templates and examples.
Docker and iptables
Reference Site https://garutilorenzo.github.io/
A bash solution for the docker and iptables conflict
If you've ever tried to set up firewall rules on the same machine where the Docker daemon is running, you may have noticed that Docker (by default) manipulates your iptables chains. If you want full control of your iptables rules, this might be a problem.
Docker and iptables
Docker uses the iptables "nat" table to route packets from and to its containers, and the "filter" table for isolation purposes. By default Docker creates some chains in your iptables setup:
sudo iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy DROP)
target prot opt source destination
DOCKER-USER all -- anywhere anywhere
DOCKER-ISOLATION-STAGE-1 all -- anywhere anywhere
ACCEPT all -- anywhere anywhere ctstate RELATED,ESTABLISHED
DOCKER all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Chain DOCKER (1 references)
target prot opt source destination
Chain DOCKER-INGRESS (0 references)
target prot opt source destination
Chain DOCKER-ISOLATION-STAGE-1 (1 references)
target prot opt source destination
DOCKER-ISOLATION-STAGE-2 all -- anywhere anywhere
RETURN all -- anywhere anywhere
Chain DOCKER-ISOLATION-STAGE-2 (1 references)
target prot opt source destination
DROP all -- anywhere anywhere
RETURN all -- anywhere anywhere
Chain DOCKER-USER (1 references)
target prot opt source destination
RETURN all -- anywhere anywhere
Now, for example, suppose we need to expose our nginx container to the world:
docker run --name some-nginx -d -p 8080:80 nginx:latest
47a12adff13aa7609020a1aa0863b0dff192fbcf29507788a594e8b098ffe47a
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
47a12adff13a nginx:latest "/docker-entrypoint.…" 27 seconds ago Up 24 seconds 0.0.0.0:8080->80/tcp, :::8080->80/tcp some-nginx
and now we can reach our nginx default page:
curl -v http://192.168.25.200:8080
* Trying 192.168.25.200:8080...
* TCP_NODELAY set
* Connected to 192.168.25.200 (192.168.25.200) port 8080 (#0)
> GET / HTTP/1.1
> Host: 192.168.25.200:8080
> User-Agent: curl/7.68.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Server: nginx/1.21.1
< Date: Thu, 14 Oct 2021 10:31:38 GMT
< Content-Type: text/html
< Content-Length: 612
< Last-Modified: Tue, 06 Jul 2021 14:59:17 GMT
< Connection: keep-alive
< ETag: "60e46fc5-264"
< Accept-Ranges: bytes
<
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
...
* Connection #0 to host 192.168.25.200 left intact
NOTE the connection test is made using an external machine, not the same machine where the docker container is running.
The “magic” iptables rules added also allow our containers to reach the outside world:
docker run --rm nginx curl ipinfo.io/ip
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 15 100 15 0 0 94 0 --:--:-- --:--:-- --:--:-- 94
1.2.3.4
Now check what happened to our iptables rules:
iptables -L
...
Chain DOCKER (1 references)
target prot opt source destination
ACCEPT tcp -- anywhere 172.17.0.2 tcp dpt:http
...
A new rule has appeared, but it is not the only rule added to our chains.
To get a more detailed view of our iptables chains, we can dump the full rule set with iptables-save:
# Generated by iptables-save v1.8.4 on Thu Oct 14 12:32:46 2021
*mangle
:PREROUTING ACCEPT [33102:3022248]
:INPUT ACCEPT [33102:3022248]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [32349:12119113]
:POSTROUTING ACCEPT [32357:12120329]
COMMIT
# Completed on Thu Oct 14 12:32:46 2021
# Generated by iptables-save v1.8.4 on Thu Oct 14 12:32:46 2021
*nat
:PREROUTING ACCEPT [1:78]
:INPUT ACCEPT [1:78]
:OUTPUT ACCEPT [13:1118]
:POSTROUTING ACCEPT [13:1118]
:DOCKER - [0:0]
:DOCKER-INGRESS - [0:0]
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
-A POSTROUTING -s 172.17.0.2/32 -d 172.17.0.2/32 -p tcp -m tcp --dport 80 -j MASQUERADE
-A DOCKER -i docker0 -j RETURN
-A DOCKER ! -i docker0 -p tcp -m tcp --dport 8080 -j DNAT --to-destination 172.17.0.2:80
COMMIT
# Completed on Thu Oct 14 12:32:46 2021
# Generated by iptables-save v1.8.4 on Thu Oct 14 12:32:46 2021
*filter
:INPUT ACCEPT [4758:361293]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [4622:357552]
:DOCKER - [0:0]
:DOCKER-INGRESS - [0:0]
:DOCKER-ISOLATION-STAGE-1 - [0:0]
:DOCKER-ISOLATION-STAGE-2 - [0:0]
:DOCKER-USER - [0:0]
-A FORWARD -j DOCKER-USER
-A FORWARD -j DOCKER-ISOLATION-STAGE-1
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A DOCKER -d 172.17.0.2/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 80 -j ACCEPT
-A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -j RETURN
-A DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP
-A DOCKER-ISOLATION-STAGE-2 -j RETURN
-A DOCKER-USER -j RETURN
COMMIT
# Completed on Thu Oct 14 12:32:46 2021
In our dump we can see some other rules added by Docker:
nat table:
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
-A POSTROUTING -s 172.17.0.2/32 -d 172.17.0.2/32 -p tcp -m tcp --dport 80 -j MASQUERADE
-A DOCKER -i docker0 -j RETURN
-A DOCKER ! -i docker0 -p tcp -m tcp --dport 8080 -j DNAT --to-destination 172.17.0.2:80
filter table:
-A FORWARD -j DOCKER-USER
-A FORWARD -j DOCKER-ISOLATION-STAGE-1
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A DOCKER -d 172.17.0.2/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 80 -j ACCEPT
-A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -j RETURN
-A DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP
-A DOCKER-ISOLATION-STAGE-2 -j RETURN
-A DOCKER-USER -j RETURN
The official Docker documentation explains in detail how iptables and Docker interact.
The problem
But what happens if we stop and restart our firewall?
systemctl stop ufw|firewalld # <- the service (ufw or firewalld) may change from distro to distro
curl -v http://192.168.25.200:8080
* Trying 192.168.25.200:8080...
* TCP_NODELAY set
docker run --rm nginx curl ipinfo.io/ip
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- 0:00:06 --:--:-- 0
we can see that:
- our container is not reachable from the outside world
- our container is not able to reach the internet
The solution
The solution to this problem is a simple bash script (combined with an awk script) to manage our iptables rules. In short, the script parses the output of the iptables-save command and preserves a set of chains. The chains preserved are:
For the nat table:
- POSTROUTING
- PREROUTING
- DOCKER
- DOCKER-INGRESS
- OUTPUT
For the filter table:
- FORWARD
- DOCKER-ISOLATION-STAGE-1
- DOCKER-ISOLATION-STAGE-2
- DOCKER
- DOCKER-INGRESS
- DOCKER-USER
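The preserve step can be sketched as follows. This is a minimal illustration of the idea, not the actual iptables-docker script: it filters an iptables-save dump so that only the declarations and rules belonging to a whitelist of chains survive (here FORWARD, DOCKER and DOCKER-USER, fed with a made-up sample dump instead of a live iptables-save).

```shell
# Sample iptables-save fragment; in real use you would pipe the output
# of `iptables-save` itself into the awk program below.
cat > /tmp/rules.txt <<'EOF'
*filter
:INPUT ACCEPT [0:0]
:FORWARD DROP [0:0]
:DOCKER - [0:0]
:DOCKER-USER - [0:0]
-A INPUT -p tcp --dport 22 -j ACCEPT
-A FORWARD -j DOCKER-USER
-A DOCKER -d 172.17.0.2/32 -p tcp --dport 80 -j ACCEPT
-A DOCKER-USER -j RETURN
COMMIT
EOF

# Keep table markers, preserved chain declarations, and rules that
# start with one of the preserved chains; drop everything else.
awk '
  /^\*/ || /^COMMIT/                  { print; next }
  /^:(FORWARD|DOCKER|DOCKER-USER) /   { print; next }
  /^-A (FORWARD|DOCKER|DOCKER-USER) / { print }
' /tmp/rules.txt > /tmp/docker-rules.txt

cat /tmp/docker-rules.txt
```

The surviving rules can later be appended to your own firewall rules before calling iptables-restore, which is essentially what the real script automates.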
Install iptables-docker
The first step is to clone this repository
Local install (sh)
NOTE: this kind of install uses a static file (src/iptables-docker.sh). By default only SSH access to the local machine is allowed. To allow specific traffic you have to edit this file manually with your own rules:
# Other firewall rules
# insert here your firewall rules
$IPT -A INPUT -p tcp --dport 1234 -m state --state NEW -s 0.0.0.0/0 -j ACCEPT
NOTE 2: if you use a swarm cluster, uncomment the lines under "Swarm mode - uncomment to enable swarm access (adjust source lan)" and adjust your LAN subnet.
To install iptables-docker on a local machine, clone this repository and run sudo sh install.sh
sudo sh install.sh
Set iptables to iptables-legacy
Disable ufw,firewalld
Synchronizing state of ufw.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install disable ufw
Failed to stop firewalld.service: Unit firewalld.service not loaded.
Failed to disable unit: Unit file firewalld.service does not exist.
Install iptables-docker.sh
Create systemd unit
Enable iptables-docker.service
Created symlink /etc/systemd/system/multi-user.target.wants/iptables-docker.service → /etc/systemd/system/iptables-docker.service.
start iptables-docker.service
Automated install (ansible)
You can also use ansible to deploy iptables-docker everywhere. To do this adjust the settings under group_vars/main.yml.
| Label | Default | Description |
|---|---|---|
| docker_preserve | yes | Preserve Docker iptables rules |
| swarm_enabled | no | Tells ansible to open the required ports for the swarm cluster |
| ebable_icmp_messages | yes | Enable response to ping requests |
| swarm_cidr | 192.168.1.0/24 | Local docker swarm subnet |
| ssh_allow_cidr | 0.0.0.0/0 | SSH allowed subnet (default: everywhere) |
| iptables_allow_rules | [] | List of dicts to dynamically open ports. Each dict has the following keys: desc, proto, from, port. See group_vars/all.yml for examples |
| iptables_docker_uninstall | no | Uninstall iptables-docker |
Now create the inventory (hosts.ini file) or use an inline inventory and run the playbook:
ansible-playbook -i hosts.ini site.yml
Usage
To start the service use:
sudo systemctl start iptables-docker
or
sudo iptables-docker.sh start
To stop the service use:
sudo systemctl stop iptables-docker
or
sudo iptables-docker.sh stop
Test iptables-docker
Now if you turn off the firewall with sudo systemctl stop iptables-docker and check the iptables-save output, you will see that the Docker rules are still there:
sudo iptables-save
# Generated by iptables-save v1.8.4 on Thu Oct 14 15:52:30 2021
*mangle
:PREROUTING ACCEPT [346:23349]
:INPUT ACCEPT [346:23349]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [340:24333]
:POSTROUTING ACCEPT [340:24333]
COMMIT
# Completed on Thu Oct 14 15:52:30 2021
# Generated by iptables-save v1.8.4 on Thu Oct 14 15:52:30 2021
*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:DOCKER - [0:0]
:DOCKER-INGRESS - [0:0]
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
-A POSTROUTING -s 172.17.0.2/32 -d 172.17.0.2/32 -p tcp -m tcp --dport 80 -j MASQUERADE
-A DOCKER -i docker0 -j RETURN
-A DOCKER ! -i docker0 -p tcp -m tcp --dport 8080 -j DNAT --to-destination 172.17.0.2:80
COMMIT
# Completed on Thu Oct 14 15:52:30 2021
# Generated by iptables-save v1.8.4 on Thu Oct 14 15:52:30 2021
*filter
:INPUT ACCEPT [357:24327]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [355:26075]
:DOCKER - [0:0]
:DOCKER-INGRESS - [0:0]
:DOCKER-ISOLATION-STAGE-1 - [0:0]
:DOCKER-ISOLATION-STAGE-2 - [0:0]
:DOCKER-USER - [0:0]
-A FORWARD -j DOCKER-USER
-A FORWARD -j DOCKER-ISOLATION-STAGE-1
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A DOCKER -d 172.17.0.2/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 80 -j ACCEPT
-A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -j RETURN
-A DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP
-A DOCKER-ISOLATION-STAGE-2 -j RETURN
-A DOCKER-USER -j RETURN
COMMIT
# Completed on Thu Oct 14 15:52:30 2021
Our container is still accessible from the outside:
curl -v http://192.168.25.200:8080
* Trying 192.168.25.200:8080...
* TCP_NODELAY set
* Connected to 192.168.25.200 (192.168.25.200) port 8080 (#0)
> GET / HTTP/1.1
> Host: 192.168.25.200:8080
> User-Agent: curl/7.68.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Server: nginx/1.21.1
< Date: Thu, 14 Oct 2021 13:53:33 GMT
< Content-Type: text/html
< Content-Length: 612
< Last-Modified: Tue, 06 Jul 2021 14:59:17 GMT
< Connection: keep-alive
< ETag: "60e46fc5-264"
< Accept-Ranges: bytes
and our container can reach the internet:
docker run --rm nginx curl ipinfo.io/ip
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 15 100 15 0 0 94 0 --:--:-- --:--:-- --:--:-- 94
my-public-ip-address
Important notes
Before installing iptables-docker please read these notes:
- both the local install and the ansible install configure your system to use iptables-legacy
- by default only port 22 is allowed
- ufw and firewalld will be permanently disabled
- filtering on all docker interfaces is disabled
Docker interfaces are:
- vethXXXXXX interfaces
- br-XXXXXXXXXXX interfaces
- docker0 interface
- docker_gwbridge interface
Extending iptables-docker
You can extend or modify iptables-docker by editing:
- src/iptables-docker.sh for the local install (sh)
- roles/iptables-docker/templates/iptables-docker.sh.j2 template file for the automated install (ansible)
Uninstall
Local install (sh)
Run uninstall.sh
Automated install (ansible)
Set the variable "iptables_docker_uninstall" to "yes" in group_vars/all.yml and run the playbook.
Logrotate daily or periodically
The logrotate utility is great at managing logs. It can rotate them, compress them, email them, delete them, archive them, and start fresh ones when you need.
Running logrotate is pretty simple: just run logrotate -vs state-file config-file.
In the above command, the -v option enables verbose mode, -s specifies a state file, and the final config-file is the configuration file, where you specify what you need done.
Basic usage
- Run all: logrotate -v /etc/logrotate.conf
- Test one config: logrotate --debug /etc/logrotate.d/symfony
- Run one config: logrotate -v /etc/logrotate.d/symfony
- Force rotation: logrotate --force /etc/logrotate.d/symfony
Introduction
Logrotate is a system utility that manages the automatic rotation and compression of log files. If log files were not rotated, compressed, and periodically pruned, they could eventually consume all available disk space on a system.
Logrotate is installed by default on Ubuntu, and is set up to handle the log rotation needs of all installed packages, including rsyslog, the default system log processor.
In this article, we will explore the default Logrotate configuration, then configure log rotation for a fictional custom application.
Prerequisites
This tutorial assumes you have an Ubuntu 16.04 server with a non-root, sudo-enabled user.
Logrotate is available on many other Linux distributions as well, but the default configuration may be quite different. Other sections of this tutorial will still apply as long as your version of Logrotate is similar to Ubuntu 16.04’s. Follow Step 1 to determine your Logrotate version.
Log into your server as your sudo-enabled user to begin.
Confirming Your Logrotate Version
If you’re using a non-Ubuntu server, first make sure Logrotate is installed by asking for its version information:
- logrotate --version
Output:
logrotate 3.8.7
If Logrotate is not installed you will get an error. Please install the software using your Linux distribution’s package manager.
If Logrotate is installed but the version number is significantly different, you may have issues with some of the configuration discussed in this tutorial. Refer to the documentation for your specific version of Logrotate by reading its man page:
- man logrotate
Next we’ll look at Logrotate’s default configuration structure on Ubuntu.
Exploring the Logrotate Configuration
Logrotate’s configuration information can generally be found in two places on Ubuntu:
- /etc/logrotate.conf: this file contains some default settings and sets up rotation for a few logs that are not owned by any system packages. It also uses an include statement to pull in configuration from any file in the /etc/logrotate.d directory.
- /etc/logrotate.d/: this is where any packages you install that need help with log rotation will place their Logrotate configuration. On a standard install you should already have files here for basic system tools like apt, dpkg, rsyslog and so on.
By default, logrotate.conf will configure weekly log rotations (weekly), with log files owned by the root user and the syslog group (su root syslog), with four log files being kept (rotate 4), and new empty log files being created after the current one is rotated (create).
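Put together, those defaults correspond to a logrotate.conf along these lines (an illustrative fragment, not a verbatim copy of Ubuntu's file):

```
# rotate log files weekly
weekly
# new logs are owned by root, group syslog
su root syslog
# keep four rotated files before deleting the oldest
rotate 4
# create a new empty log file after rotating the current one
create
# pull in per-package configuration
include /etc/logrotate.d
```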
Let's take a look at a package's Logrotate configuration file in /etc/logrotate.d. cat the file for the apt package utility:
- cat /etc/logrotate.d/apt
Output:
/var/log/apt/term.log {
rotate 12
monthly
compress
missingok
notifempty
}
/var/log/apt/history.log {
rotate 12
monthly
compress
missingok
notifempty
}
This file contains configuration blocks for two different log files in the /var/log/apt/ directory: term.log and history.log. They both have the same options. Any options not set in these configuration blocks will inherit the default values or those set in /etc/logrotate.conf. The options set for the apt logs are:
- rotate 12: keep twelve old log files.
- monthly: rotate once a month.
- compress: compress the rotated files. This uses gzip by default and results in files ending in .gz. The compression command can be changed using the compresscmd option.
- missingok: don't write an error message if the log file is missing.
- notifempty: don't rotate the log file if it is empty.
There are many more configuration options available. You can read about all of them by typing man logrotate on the command line to bring up Logrotate's manual page.
Next, we’ll set up a configuration file to handle logs for a fictional service.
Setting Up an Example Config
To manage log files for applications outside of the pre-packaged and pre-configured system services, we have two options:
- Create a new Logrotate configuration file and place it in /etc/logrotate.d/. This will be run daily as the root user along with all the other standard Logrotate jobs.
- Create a new configuration file and run it outside of Ubuntu's default Logrotate setup. This is only really necessary if you need to run Logrotate as a non-root user, or if you want to rotate logs more frequently than daily (an hourly configuration in /etc/logrotate.d/ would be ineffective, because the system's Logrotate setup only runs once a day).
Let’s walk through these two options with some example setups.
Adding Configuration to /etc/logrotate.d/
We want to configure log rotation for a fictional web server that puts an access.log and error.log into /var/log/example-app/. It runs as the www-data user and group.
To add some configuration to /etc/logrotate.d/, first open up a new file there:
- sudo nano /etc/logrotate.d/example-app
Here is an example config file that could handle these logs:
/var/log/example-app/*.log {
daily
missingok
rotate 14
compress
notifempty
create 0640 www-data www-data
sharedscripts
postrotate
systemctl reload example-app
endscript
}
Some of the new configuration directives in this file are:
- create 0640 www-data www-data: this creates a new empty log file after rotation, with the specified permissions (0640), owner (www-data), and group (also www-data).
- sharedscripts: this flag means that any scripts added to the configuration are run only once per run, instead of for each file rotated. Since this configuration would match two log files in the example-app directory, the script specified in postrotate would run twice without this option.
- postrotate to endscript: this block contains a script to run after the log file is rotated. In this case we're reloading our example app. This is sometimes necessary to get your application to switch over to the newly created log file. Note that postrotate runs before logs are compressed. Compression could take a long time, and your software should switch to the new logfile immediately. For tasks that need to run after logs are compressed, use the lastaction block instead.
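As an illustration of that last point, a lastaction block has the same shape as postrotate but fires only once all matching logs have been rotated and compressed; the upload script named here is hypothetical:

```
/var/log/example-app/*.log {
    daily
    compress
    sharedscripts
    lastaction
        # runs once, after rotation and compression of all matching logs
        /usr/local/bin/upload-old-logs.sh   # hypothetical helper script
    endscript
}
```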
After customizing the config to fit your needs and saving it in /etc/logrotate.d, you can test it by doing a dry run:
- sudo logrotate /etc/logrotate.conf --debug
This calls logrotate, points it to the standard configuration file, and turns on debug mode.
Information will print out about which log files Logrotate is handling and what it would have done to them. If all looks well, you’re done. The standard Logrotate job will run once a day and include your new configuration.
Next, we’ll try a setup that doesn’t use Ubuntu’s default configuration at all.
Creating an Independent Logrotate Configuration
In this example we have an app running as our user sammy, generating logs that are stored in /home/sammy/logs/. We want to rotate these logs hourly, so we need to set this up outside of the /etc/logrotate.d structure provided by Ubuntu.
First, we’ll create a configuration file in our home directory. Open it in a text editor:
- nano /home/sammy/logrotate.conf
Then paste in the following configuration:
/home/sammy/logs/*.log {
hourly
missingok
rotate 24
compress
create
}
Save and close the file. We’ve seen all these options in previous steps, but let’s summarize: this configuration will rotate the files hourly, compressing and keeping twenty-four old logs and creating a new log file to replace the rotated one.
You’ll need to customize the configuration to suit your application, but this is a good start.
To test that it works, let’s make a log file:
- cd ~
- mkdir logs
- touch logs/access.log
Now that we have a blank log file in the right spot, let's run the logrotate command.
Because the logs are owned by sammy we don't need to use sudo. We do need to specify a state file though. This file records what logrotate saw and did the last time it ran, so that it knows what to do the next time it runs. This is handled for us when using the Ubuntu Logrotate setup (it can be found at /var/lib/logrotate/status), but we need to do it manually now.
We'll have Logrotate put the state file right in our home directory for this example. It can go anywhere that's accessible and convenient:
logrotate /home/sammy/logrotate.conf --state /home/sammy/logrotate-state --verbose
Output:
reading config file /home/sammy/logrotate.conf
Handling 1 logs
rotating pattern: /home/sammy/logs/*.log hourly (24 rotations)
empty log files are rotated, old logs are removed
considering log /home/sammy/logs/access.log
log does not need rotating
--verbose will print out detailed information about what Logrotate is doing. In this case it looks like it didn't rotate anything. This is Logrotate's first time seeing this log file, so as far as it knows, the file is zero hours old and it shouldn't be rotated.
If we look at the state file, we’ll see that Logrotate recorded some information about the run:
- cat /home/sammy/logrotate-state
Output:
logrotate state -- version 2
"/home/sammy/logs/access.log" 2017-11-7-19:0:0
Logrotate noted the logs that it saw and when it last considered them for rotation. If we run this same command one hour later, the log will be rotated as expected.
If you want to force Logrotate to rotate the log file when it otherwise would not have, use the --force flag:
- logrotate /home/sammy/logrotate.conf --state /home/sammy/logrotate-state --verbose --force
This is useful when testing postrotate
and other scripts.
Finally, we need to set up a cron job to run Logrotate every hour. Open your user’s crontab:
- crontab -e
This will open a up a text file. There may be some comments already in the file that explain the basic syntax expected. Move the cursor down to a new blank line at the end of the file and add the following:
14 * * * * /usr/sbin/logrotate /home/sammy/logrotate.conf --state /home/sammy/logrotate-state
This task will run on the 14th minute of every hour, every day. It runs basically the same logrotate command we ran previously, though we expanded logrotate to its full path of /usr/sbin/logrotate just to be safe. It's good practice to be as explicit as possible when writing cron jobs.
Save the file and exit. This will install the crontab and our task will run on the specified schedule.
If we revisit our log directory in about an hour we should find the rotated and compressed log file access.log.1.gz (or .2.gz if you ran Logrotate with the --force flag).
Conclusion
In this tutorial we verified our Logrotate version, explored the default Ubuntu Logrotate configuration, and set up two different types of custom configurations. To learn more about the command line and configuration options available for Logrotate, you can read its manual page by running man logrotate in your terminal.