docker: add setup script
Adding setup script for docker containers. The script will edit all
necessary configuration files, build and launch all containers, and do
the initial database setup - including populating the database with
data supplied by the user. Changed docker/README to reflect new setup
instructions.

Signed-off-by: Amber Elliot <amber.n.elliot@intel.com>
Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
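In the simplest case the new script is run with only a database dump to import, for example (using the dump path from the README example below):

    ./dockersetup.py -d ~/databasedump.sql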
parent dde0a82a35
commit aa07c1b451
README | 3

@@ -8,7 +8,8 @@ OE-Core.
 There are two main methods of setting up this application - within
 a set of Docker containers, or standalone. The Docker-based setup
 is more suited for production whereas standalone is a bit easier
-for development.
+for development. If you simply want to run and use the layer index,
+please use the docker setup.
 
 
 Docker Setup
docker/README

@@ -1,49 +1,52 @@
-## Layerindex example docker instructions
+Layerindex Docker Setup Instructions
+
-## This is set up to make a cluster of 5 containers:
-## - layersapp: the application
-## - layersdb: the database
-## - layersweb: NGINX web server (as a proxy and for serving static content)
-## - layerscelery: Celery (for running background jobs)
-## - layersrabbit: RabbitMQ (required by Celery)
+The script dockersetup.py will set up and configure a cluster of 5 docker containers:
+
-## You will need docker and docker-compose installed in order to proceed.
+- layersapp: the application
+- layersdb: the database
+- layersweb: NGINX web server (as a proxy and for serving static content)
+- layerscelery: Celery (for running background jobs)
+- layersrabbit: RabbitMQ (required by Celery)
+
-## First, find and replace layers.openembedded.org in the docker-compose.yml with your hostname
-## You'll probably also want to replace the database password "testingpw".
+The script will edit all necessary configuration files, build and launch all containers, and do the initial database setup. It is advised that you start with a .sql database file to prepopulate your database. The following instructions will walk you through the setup.
+
-## If you want to change any of the application configuration, edit docker/settings.py as desired.
-## Some settings have been set so that values can be passed in via environment variables, so you can set these from the docker-compose.yml if you want to.
-## You will definitely need to set SECRET_KEY and probably EMAIL_HOST.
+1) Install docker and docker-compose per the instructions at:
+   https://docs.docker.com/compose/install/
+   ** Note: for the latest docker-compose version follow the directions above, rather than using apt.
-## If you are on a network that requires a proxy to get out to the internet, then you'll need to:
-## - Uncomment several lines in Dockerfile (search for "proxy")
-## - Edit docker/.gitconfig and docker/git-proxy
+2) Clone the repo and check out the appropriate branch:
+   git clone https://github.intel.com/peggleto/layerindex-web.git
+   git checkout origin/dev_snapshot
+
-## Start the containers:
-docker-compose up
+3) Run the setup script (dockersetup.py). You can optionally supply your hostname, proxy settings, a .sql database file of layer mappings to import, and a host-to-container port mapping. For more information, run:
+   ./dockersetup.py -h
+
+   Example command to run the containers with a proxy and with a database to import:
+   ./dockersetup.py -d ~/databasedump.sql -p http://proxy-chain.intel.com:911
+
+   During the setup you will be asked for a username, email and password to set up a superuser for the database. This will allow you to access the database later, should you need to.
+
+4) Once the script completes, open a web browser and navigate to <hostname>:<mapped_port>/layerindex. If you haven't supplied a hostname and/or port mapping, this will default to localhost:8080.
+
+5) If you have chosen not to supply a prepopulated database and are instead starting fresh, you should now follow the instructions in the "Database Setup" section of the main README.
+
+6) If you need to rerun this script a second time for any reason, make sure to tear the containers down first with docker-compose down. Otherwise, your newly generated root database password will not match the existing one.
+
+7) To update the layers in the future, you can optionally do the following:
+
+   Run the layer updates:
+   docker-compose run --rm layersapp /opt/layerindex/layerindex/update.py
+
+   Or do a full refresh:
+   docker-compose run --rm layersapp /opt/layerindex/layerindex/update.py -r
+
-## Apply any pending layerindex migrations / initialize the database
-docker-compose run --rm layersapp /opt/migrate.sh
+TROUBLESHOOTING:
+
-## For a fresh database, create an admin account
-docker-compose run --rm layersapp /opt/layerindex/manage.py createsuperuser
+- Network issues behind a proxy when building the container: on some Ubuntu systems, /etc/resolv.conf points to 127.0.0.x rather than your local DNS server. Docker will look there for your DNS server, and when it fails to find it, it will default to using a public one (frequently 8.8.8.8). Many corporate proxies block public DNS servers, so you will need to manually supply the DNS server to docker using /etc/docker/daemon.json:
+  {"dns": ["xx.xx.xx.xx"]}
+
-## Set the volume permissions using debian:stretch since we recently fetched it
-docker run --rm -v layerindexweb_layersmeta:/opt/workdir debian:stretch chown 500 /opt/workdir
-docker run --rm -v layerindexweb_layersstatic:/usr/share/nginx/html debian:stretch chown 500 /usr/share/nginx/html
-
-## Generate static assets. Run this command again to regenerate at any time (when static assets in the code are updated)
-docker-compose run --rm -e STATIC_ROOT=/usr/share/nginx/html -v layerindexweb_layersstatic:/usr/share/nginx/html layersapp /opt/layerindex/manage.py collectstatic
-
-## Run the layer updates
-docker-compose run --rm layersapp /opt/layerindex/layerindex/update.py
-
-## Or do a full refresh
-docker-compose run --rm layersapp /opt/layerindex/layerindex/update.py -r
-
-## Once you've finished here, if this is a fresh database, you should now
-## follow the instructions in the "Database Setup" section of the main README.
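Putting the new instructions together, an end-to-end run on a proxied host might look like the following sketch (the hostname and proxy values are placeholders to be replaced with your own):

    git clone https://github.intel.com/peggleto/layerindex-web.git
    cd layerindex-web
    git checkout origin/dev_snapshot
    ./dockersetup.py -o layers.example.com -d ~/databasedump.sql -p http://proxy-chain.intel.com:911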
dockersetup.py | 166 (new executable file)

@@ -0,0 +1,166 @@
#!/usr/bin/env python3

import argparse
import re
import subprocess
import time
import random

# This script will make a cluster of 5 containers:
#
# - layersapp: the application
# - layersdb: the database
# - layersweb: NGINX web server (as a proxy and for serving static content)
# - layerscelery: Celery (for running background jobs)
# - layersrabbit: RabbitMQ (required by Celery)
#
# It will build and run these containers and set up the database.
#
# Copyright (C) 2018 Intel Corporation
# Author: Amber Elliot <amber.n.elliot@intel.com>
#
# Licensed under the MIT license, see COPYING.MIT for details

def get_args():
    parser = argparse.ArgumentParser(
        description='Script sets up the Layer Index tool with Docker Containers.')
    parser.add_argument('-o', '--hostname', type=str, help='Hostname of your machine. Defaults to localhost if not set.', required=False, default="localhost")
    parser.add_argument('-p', '--http-proxy', type=str, help='http proxy in the format http://<myproxy:port>', required=False)
    parser.add_argument('-s', '--https-proxy', type=str, help='https proxy in the format http://<myproxy:port>', required=False)
    parser.add_argument('-d', '--databasefile', type=str, help='Location of your database file to import. Must be a .sql file.', required=False)
    parser.add_argument('-m', '--portmapping', type=str, help='Port mapping in the format HOST:CONTAINER. Default is set to 8080:80', required=False, default='8080:80')
    args = parser.parse_args()

    # Split the http proxy into host and port so they can be written into
    # docker/git-proxy later on.
    port = proxymod = ""
    try:
        if args.http_proxy:
            split = args.http_proxy.split(":")
            port = split[2]
            proxymod = split[1].replace("/", "")
    except IndexError:
        raise argparse.ArgumentTypeError("http_proxy must be in the format http://<myproxy:port>")

    if len(args.portmapping.split(":")) != 2:
        raise argparse.ArgumentTypeError("Port mapping must be in the format HOST:CONTAINER. Ex: 8080:80")

    return args.hostname, args.http_proxy, args.https_proxy, args.databasefile, port, proxymod, args.portmapping

# Edit http_proxy and https_proxy in Dockerfile
def edit_dockerfile(http_proxy, https_proxy):
    filedata = readfile("Dockerfile")
    newlines = []
    lines = filedata.splitlines()
    for line in lines:
        if "ENV http_proxy" in line and http_proxy:
            newlines.append("ENV http_proxy " + http_proxy + "\n")
        elif "ENV https_proxy" in line and https_proxy:
            newlines.append("ENV https_proxy " + https_proxy + "\n")
        else:
            newlines.append(line + "\n")

    writefile("Dockerfile", ''.join(newlines))


# If using a proxy, add proxy values to git-proxy and uncomment proxy script in .gitconfig
def edit_gitproxy(proxymod, port):
    filedata = readfile("docker/git-proxy")
    newlines = []
    lines = filedata.splitlines()
    for line in lines:
        if "PROXY=" in line:
            newlines.append("PROXY=" + proxymod + "\n")
        elif "PORT=" in line:
            newlines.append("PORT=" + port + "\n")
        else:
            newlines.append(line + "\n")
    writefile("docker/git-proxy", ''.join(newlines))

    filedata = readfile("docker/.gitconfig")
    newdata = filedata.replace("#gitproxy", "gitproxy")
    writefile("docker/.gitconfig", newdata)

# Add hostname, secret key, db info, and email host in docker-compose.yml
def edit_dockercompose(hostname, dbpassword, secretkey, portmapping):
    filedata = readfile("docker-compose.yml")
    portflag = False
    newlines = []
    lines = filedata.splitlines()
    for line in lines:
        if portflag:
            # The line after "ports:" carries the mapping; reuse its
            # indentation (minus any comment marker) for the new value.
            format = line[0:line.find("-")].replace("#", "")
            newlines.append(format + '- "' + portmapping + '"' + "\n")
            portflag = False
        elif "hostname:" in line:
            format = line[0:line.find("hostname")].replace("#", "")
            newlines.append(format + "hostname: " + hostname + "\n")
        elif "- SECRET_KEY" in line:
            format = line[0:line.find("- SECRET_KEY")].replace("#", "")
            newlines.append(format + "- SECRET_KEY=" + secretkey + "\n")
        elif "- DATABASE_PASSWORD" in line:
            format = line[0:line.find("- DATABASE_PASSWORD")].replace("#", "")
            newlines.append(format + "- DATABASE_PASSWORD=" + dbpassword + "\n")
        elif "- MYSQL_ROOT_PASSWORD" in line:
            format = line[0:line.find("- MYSQL_ROOT_PASSWORD")].replace("#", "")
            newlines.append(format + "- MYSQL_ROOT_PASSWORD=" + dbpassword + "\n")
        elif "ports:" in line:
            newlines.append(line + "\n")
            portflag = True
        else:
            newlines.append(line + "\n")
    writefile("docker-compose.yml", ''.join(newlines))

def generatepasswords(passwordlength):
    # Draw from a fixed character set using the OS's secure random source.
    return ''.join([random.SystemRandom().choice('abcdefghijklmnopqrstuvwxyz0123456789!@#%^&*-_=+') for i in range(passwordlength)])

def readfile(filename):
    f = open(filename, 'r')
    filedata = f.read()
    f.close()
    return filedata

def writefile(filename, data):
    f = open(filename, 'w')
    f.write(data)
    f.close()

# Generate secret key and database password
secretkey = generatepasswords(50)
dbpassword = generatepasswords(10)

## Get user arguments and modify config files
hostname, http_proxy, https_proxy, dbfile, port, proxymod, portmapping = get_args()

if http_proxy:
    edit_gitproxy(proxymod, port)
if http_proxy or https_proxy:
    edit_dockerfile(http_proxy, https_proxy)

edit_dockercompose(hostname, dbpassword, secretkey, portmapping)

## Start up containers
return_code = subprocess.call("docker-compose up -d", shell=True)

# Apply any pending layerindex migrations / initialize the database.
# Database might not be ready yet; have to wait then poll.
time.sleep(8)
while True:
    time.sleep(2)
    return_code = subprocess.call("docker-compose run --rm layersapp /opt/migrate.sh", shell=True)
    if return_code == 0:
        break
    else:
        print("Database server may not be ready; will try again.")

# Import the user's supplied data
if dbfile:
    return_code = subprocess.call("docker exec -i layersdb mysql -uroot -p" + dbpassword + " layersdb " + " < " + dbfile, shell=True)

## For a fresh database, create an admin account
print("Creating database superuser. Input user name, email, and password when prompted.")
return_code = subprocess.call("docker-compose run --rm layersapp /opt/layerindex/manage.py createsuperuser", shell=True)

## Set the volume permissions using debian:stretch since we recently fetched it
return_code = subprocess.call("docker run --rm -v layerindexweb_layersmeta:/opt/workdir debian:stretch chown 500 /opt/workdir && \
    docker run --rm -v layerindexweb_layersstatic:/usr/share/nginx/html debian:stretch chown 500 /usr/share/nginx/html", shell=True)

## Generate static assets. Run this command again to regenerate at any time (when static assets in the code are updated)
return_code = subprocess.call("docker-compose run --rm -e STATIC_ROOT=/usr/share/nginx/html -v layerindexweb_layersstatic:/usr/share/nginx/html layersapp /opt/layerindex/manage.py collectstatic --noinput", shell=True)
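As a usage sketch of the options parsed in get_args() above, a run that changes the host-side port mapping and supplies an https proxy could look like this (hostname, port, and proxy values are placeholders):

    ./dockersetup.py -o layers.example.com -m 9090:80 -s http://proxy.example.com:911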