Confluence example?

Sorry, saves me starting a new thread! Trying to now get Confluence running via the same mechanics.

I've had a failure on:

Parameters: [AppEnvironment7Value, AppEnvironment7Key] do not exist in the template

But I do have it. I was wondering if it's because I've done the string join wrong in AppEnvironment4?

AppEnvironment1Key: 'ATL_PROXY_NAME' # optional
AppEnvironment1Value: !GetAtt 'Alb.Outputs.StackName'
AppEnvironment2Key: 'ATL_PROXY_PORT' # optional
AppEnvironment2Value: '80' # optional
AppEnvironment3Key: 'ATL_TOMCAT_SCHEME' # optional
AppEnvironment3Value: 'http' # optional
AppEnvironment4Key: 'ATL_JDBC_URL' # optional
AppEnvironment4Value: !Join ['', ['jdbc:mysql://', !GetAtt 'Database.Outputs.DnsName', ':5432/confluence']]
AppEnvironment5Key: 'ATL_JDBC_USER' # optional
AppEnvironment5Value: 'postgres' # optional
AppEnvironment6Key: 'ATL_JDBC_PASSWORD' # optional
AppEnvironment6Value: 'mysecretpassword' # optional
AppEnvironment7Key: 'ATL_DB_TYPE' # optional
AppEnvironment7Value: 'postgresql' # optional

Only 6 env vars are supported.

I see a bunch of static values in your example. You could set them in your Dockerfile like this:

ENV ATL_DB_TYPE postgresql

Btw: Instead of

AppEnvironment4Value: !Join ['', ['jdbc:mysql://', !GetAtt 'Database.Outputs.DnsName', ':5432/confluence']]

you could try (and you'd better replace mysql with the postgres variant):

AppEnvironment4Value: !Sub 'jdbc:mysql://${Database.Outputs.DnsName}:5432/confluence'


I was just pulling the latest official image from Atlassian for it. Rather annoying to pull it down just to add a few variables and then have to push it back.

Will try it shortly.

We hear you. Unfortunately, CloudFormation limits the total number of parameters to 60, and we are already very close to that limit.

Thanks for the help!

I don't see a cfn module for ECR. Out of interest, how do you like to manage repos? I'm just using the aws-cli for a POC but was hoping to standardize all IaC.

You could create an ECR repo with plain CloudFormation:
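A minimal sketch (the logical ID and repository name here are placeholders, not from the thread):

    Repository:
      Type: 'AWS::ECR::Repository'
      Properties:
        RepositoryName: 'confluence' # placeholder name; omit to let CloudFormation generate one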

But the lifecycle is problematic: you cannot delete a repo that still contains images.

Appreciate this is rather off track, but any tips on how to troubleshoot would be welcome!

I now get a completed deployment, but I'm back to the containers looping shutdowns and a 502 error. Whereas the logs showed nice clear errors in WordPress, I get no errors here. If I click on a task container log, it spins through the same setup text and then just terminates at the end! I don't understand the log system, as I expected a log per container (so each log would look the same: start up -> terminate -> end). But the terminate shows up at different points in different logs. I tried looking in CloudWatch to see if that helped, and I found that GUI even more confusing!

Is there a good place to look and try to work through it? Any suggested steps? So far I've tried upping the container memory and CPU to 4 GB / 2 vCPU to see if that would solve it, but no joy…

The logs show the container setting up Confluence, show it completing, and then:


2020-02-19 11:28:06,646 INFO [Catalina-utility-1] [atlassian.confluence.cluster.DefaultClusterConfigurationHelper] lambda$saveSetupConfigIntoSharedHome$9 Finished writing setup configuration into shared home

2020-02-19 11:28:06,646 INFO [Catalina-utility-1] [atlassian.confluence.cluster.DefaultClusterConfigurationHelper] lambda$saveSetupConfigIntoSharedHome$9 Finished writing setup configuration into shared home



Session terminated, terminating shell... ...terminated.

Session terminated, terminating shell... ...terminated.

The code I'm using is here, if that sheds any light on silly mistakes!

Can you check the status of the containers that are stopped?
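You can see this in the ECS console on the stopped task, or pull it with the aws-cli like this (a sketch; the cluster name and task ARN are placeholders):

    # List tasks that have stopped in the cluster
    aws ecs list-tasks --cluster my-cluster --desired-status STOPPED

    # Describe one of them and read the stopped reason
    aws ecs describe-tasks --cluster my-cluster --tasks <task-arn> \
      --query 'tasks[].stoppedReason'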


Stopped reason

Task failed ELB health checks in (target-group arn:aws:elasticloadbalancing:us-east-1:794961467219:targetgroup/farga-Targe-1IMU6GHRGLYJW/f6663bf5cc727fb6)

So based on my search, this is normally due to the config of the health check or ports not being open.

I looked at my docker-compose file, which uses nginx as a proxy, and thought maybe my issue was that I didn't have the default Confluence port open from Fargate to the ALB. But I've now added Port: '8090' for the ALB listener and still no joy!

I think this is now a networking issue which is rather my worst nightmare…

If the app talks on port 8090, should I set:

service -> appport: 8090 (instead of my current 80).
The ALB listener to 8090.

Shouldn't I also need to set the Fargate SG to have that port open as well (or does the service -> appport do that)?
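For reference, opening the app port from the ALB's security group to the service's security group in plain CloudFormation looks roughly like this (a sketch with placeholder logical IDs; the fargate-service module may already wire this up from AppPort):

    ServiceFromAlbIngress:
      Type: 'AWS::EC2::SecurityGroupIngress'
      Properties:
        GroupId: !Ref ServiceSecurityGroup # SG attached to the Fargate tasks (placeholder)
        IpProtocol: tcp
        FromPort: 8090
        ToPort: 8090
        SourceSecurityGroupId: !Ref AlbSecurityGroup # SG attached to the ALB (placeholder)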

Again, I patched the nginx build together, so I'm not sure what the nginx code does. My docker-compose file is here, if that might shed light for someone more advanced than I!



Site loads (not correctly, but getting closer!)

I updated the AppPort and ALB port to 8090, then added :8090 onto the end of the ALB URL, and bingo was his name-o!


If I get your use case right, you wanna do something like this:

    Service: # logical resource name, use any ID you like
      Type: 'AWS::CloudFormation::Stack'
      Properties:
        Parameters:
          ProxyImage: 'nginx:latest'
          ProxyPort: '80'
          AppImage: 'atlassian/confluence-server:latest'
          AppPort: '8090'
        TemplateURL: './node_modules/@cfn-modules/fargate-service/module.yml'


Nothing so complex! haha.

In my docker-compose file I use nginx for a reverse proxy (so I could test building it ready for Fargate).
In Fargate I have the ALB.

It now loads but has some sort of Confluence issue: it shows the web page but doesn't have the web form to init the config :confused: on the load screen for some reason…


Keep in mind that you store CONFLUENCE_HOME inside the container. If you deploy a new version, your data will be gone! There is no support for persistent file system storage on Fargate at the moment, which will likely change in the future.

Didn't know that! Thanks for the heads up… that's going to be a problem! Is this restriction also there in EKS Fargate / EKS / ECS?

:x: Fargate :cry: (no matter if you use ECS or EKS).
:white_check_mark: Plain ECS
:white_check_mark: Plain EKS