Health checks 404'ing

Hi,

I’m using php-basic as the basis for a Laravel deployment. Unfortunately the ELB health checks all 404, so the tasks are constantly being torn down and respun up. For the short while a task is running, the app’s URL throws 503s, but there are no logs showing what went wrong.

Here is the Dockerfile for Nginx:

FROM nginx:1.17

# Install curl for health check
RUN apt-get update && apt-get install --no-install-recommends --no-install-suggests -y curl

# Configure NGINX
COPY docker/nginx/default.conf /etc/nginx/conf.d/default.conf

HEALTHCHECK --interval=15s --timeout=10s --start-period=60s --retries=2 CMD curl -f http://127.0.0.1/health-check.php || exit 1

And this is php-fpm’s:

FROM php:7.2-fpm

# Install system dependencies
RUN apt-get update && apt-get install -y \
    git \
    curl \
    libpng-dev \
    libonig-dev \
    libxml2-dev \
    zip \
    unzip

# Clear cache
RUN apt-get clean && rm -rf /var/lib/apt/lists/*

RUN export CFLAGS="$PHP_CFLAGS" CPPFLAGS="$PHP_CPPFLAGS" LDFLAGS="$PHP_LDFLAGS" \
    && apt-get update \
    && apt-get install -y --no-install-recommends \
        libmagickwand-dev \
    && rm -rf /var/lib/apt/lists/* \
    && pecl install imagick-3.4.4 \
    && docker-php-ext-enable imagick

# Install PHP extensions
RUN docker-php-ext-install pdo_mysql mbstring exif pcntl bcmath gd zip

# Get latest Composer
COPY --from=composer:latest /usr/bin/composer /usr/bin/composer

RUN mkdir -p /home/www-data/.composer

COPY --chown=www-data:www-data . /var/www/

# Set working directory
WORKDIR /var/www

HEALTHCHECK --interval=15s --timeout=10s --start-period=60s --retries=2 CMD curl -f http://127.0.0.1/health-check.php || exit 1

EXPOSE 9000
# Start PHP FPM
CMD ["php-fpm"]

The Nginx default.conf file looks like this:

server {
    listen 80;
    index index.php index.html;
    error_log  /var/log/nginx/error.log;
    access_log /var/log/nginx/access.log;
    root /var/www/public;
    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
    }
    location / {
        try_files $uri $uri/ /index.php?$query_string;
        gzip_static on;
    }
}

I’m at a loss as to what is going on. Anyone have some thoughts?

Hi Jai,

Thanks for posting your question! Glad to hear you started the journey of containerizing your Laravel application.

Did you verify that /var/www actually contains the health-check.php file? If not, you can use the following commands to do so:

docker run -it --entrypoint bash <INSERT PHP-FPM IMAGE>
ls -la /var/www

Are you pushing the Docker image manually or are you using the pipeline already?

Thanks,
Andreas

Hi, thanks for your reply.

The local Docker build is slightly different. I found that when using the COPY command in the Dockerfile locally, file changes didn’t end up in the container without a rebuild, so locally I use the volumes option in docker-compose, and the above Dockerfiles for the build and push to AWS ECR.
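To illustrate, my local compose file mounts the source roughly like this (service name and paths are from memory, so treat this as a sketch):

```yaml
# Local-only sketch (service name and paths assumed): mount the source
# instead of baking it in with COPY, so edits show up without a rebuild.
services:
  php:
    build: .
    volumes:
      - ./:/var/www
```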

The Laravel app root dir has a public dir in it, which is what the Nginx conf file points to. The health-check.php file is in that public dir.

When I run the docker command you suggested I get: /bin/ls: /bin/ls: cannot execute binary file

It would seem there is something wrong with the php-fpm image build…

If I run the image with docker run --publish 9000:9000 --name php php-fpm-aws-image, I get the following logged, which looks promising:

[18-Sep-2020 09:05:54] NOTICE: fpm is running, pid 1
[18-Sep-2020 09:05:54] NOTICE: ready to handle connections

I managed to get onto the running image locally using docker exec -it my_container /bin/bash. From there I was able to verify that the health-check.php file is in /var/www/public:

root@c2a23b932999:/var/www/public# ls -al
total 3820
drwxr-xr-x  8 www-data www-data    4096 Aug 27 04:18  .
drwxr-xr-x  1 root     root        4096 Sep 18 09:01  ..
-rwxr-xr-x  1 www-data www-data     593 Aug 17 23:26  .htaccess
drwxr-xr-x 12 www-data www-data    4096 Aug 17 23:26  assets
drwxr-xr-x  2 www-data www-data    4096 Aug 17 23:26  css
-rwxr-xr-x  1 www-data www-data       0 Aug 17 23:26  favicon.ico
-rw-r--r--  1 www-data www-data       3 Aug 27 03:15  health-check.php
drwxr-xr-x 12 www-data www-data    4096 Aug 17 23:26  images
-rwxr-xr-x  1 www-data www-data    1823 Aug 17 23:26  index.php
drwxr-xr-x  2 www-data www-data    4096 Aug 17 23:26  js
-rw-r--r--  1 www-data www-data      71 Aug 17 23:26  mix-manifest.json
-rwxr-xr-x  1 www-data www-data      24 Aug 17 23:26  robots.txt
drwxr-xr-x  6 www-data www-data    4096 Aug 17 23:26  storage
drwxr-xr-x  4 www-data www-data    4096 Aug 17 23:26  vendor

I now have both of the above images running locally and linked, and have been able to reproduce the issue I’m seeing in the AWS stack. I linked the two containers with this command: docker run --link php:php --publish 8000:80 --name nginx nginx-aws-image.

Nginx is logging the 404s:

127.0.0.1 - - [18/Sep/2020:10:03:49 +0000] "GET /health-check.php HTTP/1.1" 404 154 "-" "curl/7.64.0"
2020/09/18 10:03:49 [info] 6#6: *35 client 127.0.0.1 closed keepalive connection
127.0.0.1 - - [18/Sep/2020:10:04:04 +0000] "GET /health-check.php HTTP/1.1" 404 154 "-" "curl/7.64.0"
2020/09/18 10:04:04 [info] 6#6: *36 client 127.0.0.1 closed keepalive connection
127.0.0.1 - - [18/Sep/2020:10:04:19 +0000] "GET /health-check.php HTTP/1.1" 404 154 "-" "curl/7.64.0"
2020/09/18 10:04:19 [info] 6#6: *37 client 127.0.0.1 closed keepalive connection

Since Laravel does all the static file serving too, I thought it might be because the Nginx container didn’t have the /var/www/public dir declared as root in the conf, but adding WORKDIR /var/www/public to the Nginx Dockerfile didn’t fix it, though it does create the directory in the container.
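For reference, if missing static files were the cause, copying the public dir into the nginx image would look roughly like this (paths assumed from my project layout; just a sketch):

```dockerfile
# Sketch only (paths assumed): bake the app's public assets into the
# nginx image so try_files can stat them; .php requests still go to php-fpm.
FROM nginx:1.17
COPY docker/nginx/default.conf /etc/nginx/conf.d/default.conf
COPY public/ /var/www/public/
```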

Further info.

I have managed to get nginx to output this error log entry:

2020/09/20 03:13:34 [error] 7#7: *7 connect() failed (111: Connection refused) while connecting to upstream, client: 127.0.0.1, server: , request: "GET /health-check.php HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "127.0.0.1"

I have checked the zz-docker.conf file in the php-fpm container and it has the standard setup:

[global]
daemonize = no

[www]
listen = 9000

I even tried adding listen.allowed_clients = 127.0.0.1 to it, with no luck. I can’t see why php-fpm would be refusing the connection.

Update.

This randomly started working in AWS, but still not locally.

For local development you can’t use 127.0.0.1.

Inside your nginx config, in the fastcgi_pass line, replace 127.0.0.1 with the name of your php-fpm container (the name you gave it inside your docker-compose.yml).

For example:

fastcgi_pass php:9000;

(if your container is named “php”)

Sorry, I’m not sure that’s right, especially when the book has it as 127.0.0.1.

If I run the php-basic app as per the book, with fastcgi_pass 127.0.0.1:9000; it works fine.

You have to use 127.0.0.1 and not the container name. This is how Fargate networking works, and we replicated this behaviour in docker-compose (see network_mode).
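A minimal sketch of what that replication looks like, assuming your services/images are called php and nginx (names assumed): with network_mode: "service:php", nginx shares php’s network namespace, just like Fargate’s awsvpc mode, so fastcgi_pass 127.0.0.1:9000 keeps working.

```yaml
# Sketch (image/service names assumed): nginx joins php's network
# namespace, mimicking Fargate's awsvpc networking, so the nginx
# config can keep fastcgi_pass 127.0.0.1:9000.
services:
  php:
    image: php-fpm-aws-image
    ports:
      - "8000:80"   # published here because nginx shares this namespace
  nginx:
    image: nginx-aws-image
    network_mode: "service:php"
    depends_on:
      - php
```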

Yeah, for Fargate you have to use 127.0.0.1, but for local development (i.e. on your machine, using Docker) you have to use the container name, as Docker will not put your nginx and your php-fpm in the same network namespace by default.

Not when you use the docker-compose setup we have in place :slight_smile:
