Speed up your Oracle Database provisioning with Docker and Ansible
Warming up before the AMIS 25th Conference, where I will be presenting with my friend and colleague Arturo Viveros (@gugalnikov) about Oracle SOA Suite provisioning, I want to share some practices that help us share Oracle Database instances among developers and improve our productivity.
Since I started working with Oracle technologies, almost 7 years ago, provisioning Oracle Database instances has never been an easy process. It demands configuring the operating system with the right packages and kernel parameters, preparing users and groups, and then installing the database engine. Once you have your engine installed, you can start creating database instances.
Now that you have a running database instance, you can start your database development: creating schemas, running SQL scripts, and connecting your applications to store data.
So, should I repeat this process on each developer workstation?
There are several options to improve this process: share one database among all developers, create and share a VM image, use automation tools like Vagrant and Packer to build images, or automate the installation with a tool like Puppet or Chef.
I will share how we are doing it in a current project, using Docker as the container platform and Ansible as our automation tool.
Let’s get started…
Defining the process
After several months of working with automation tools, I came to understand a key principle of automated provisioning: “divide and conquer”.
Provisioning some types of systems is not as simple as download, unzip, and run. In some cases you need to split your provisioning process into steps, so you can treat those steps as checkpoints and avoid re-running the whole process from scratch. I mean: from configuring your OS for running Oracle Database to actually running your SQL scripts is a long way, right? Even if you automate it or do it manually.
In this case, as I explained before, we have 3 major steps:
- Prepare the OS and install the database software
- Create a database instance
- Run scripts
And each step will become a checkpoint, i.e. a Docker image.
I won’t go into every detail about installing Oracle Database, because there is plenty of information about this. At Sysco, we have developed several Ansible roles to automate the installation and configuration of Oracle software: http://github.com/sysco-middleware. Therefore, I will cover only how we separate this process into steps using Docker and Ansible.
If you want to learn more about how Docker integrates with Ansible, take a look at my previous post: http://jeqo.github.io/blog/devops/ansible-agentless-provisioning/
Step 1: Install Oracle Database
Go to the first repository, called docker-image-oracle-database: http://github.com/sysco-middleware/docker-image-oracle-database
Here is how we create an image with Oracle Database 11g (or 12c) installed using Ansible:
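The playbook itself lives in the repository; a simplified sketch of its shape (module parameters, container name, and variable names here are illustrative, not the exact contents of the real main.yml) could be:

```yaml
# main.yml -- illustrative sketch; see the repository for the real playbook
- hosts: localhost
  vars:
    base_image: "syscomiddleware/oraclelinux:6.7"
    container_name: "oracle-database-build"
  tasks:
    # Start a temporary container from the prepared base image
    - name: Create build container
      docker_container:
        name: "{{ container_name }}"
        image: "{{ base_image }}"
        command: sleep infinity
        state: started
    # Register the running container as an Ansible host
    - name: Add container to inventory
      add_host:
        name: "{{ container_name }}"
        ansible_connection: docker

# Install the database software inside the container
- hosts: oracle-database-build
  roles:
    - oracle-database
```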
We have 3 main steps here:
First, we create the container using a variable that contains the base image, “syscomiddleware/oraclelinux:6.7”, which is based on the official Oracle Linux image with some packages installed. Then we add the running Docker container as an Ansible host.
Second, we connect to the running Docker container and run our Oracle Database role (https://github.com/sysco-middleware/ansible-role-oracle-database).
Finally, after installing the Oracle Database software, I create a checkpoint: commit the container as an image, kill and remove the running container, and optionally (if you have a private registry) push your image. (Check this issue about sharing Oracle software inside container images: https://github.com/oracle/docker-images/issues/97)
To run this process you just need:
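Assuming the playbook is named main.yml, that means a single command (Ansible and Docker must already be installed on your host):

```shell
ansible-playbook main.yml
```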
To check that our “oracle-database” image is created successfully, just run
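For example (the exact repository and tag shown in the listing depend on how the commit step names the image):

```shell
# List local images; the committed oracle-database image should appear
docker images
# Or filter directly by name
docker images | grep oracle-database
```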
The main goal here is to have a checkpoint: an image with the database engine installed. This single image can be reused to move forward.
Step 2: Create database instance
Once we have the Oracle Database image, we won’t have to reinstall it again! :) …as long as we keep using Docker, and at least until we find out how to improve the installation process.
So, we can move forward from this point to the next stage: creating a database instance.
Let’s go and check out this repository: https://github.com/sysco-middleware/docker-image-oracle-database-instance
Here we follow the same approach as in the previous step: prepare temporary containers, run Ansible roles, and commit the image.
You can check the main.yml file here: https://github.com/sysco-middleware/docker-image-oracle-database-instance/blob/master/main.yml
And as you see, we are using another Ansible role, called “oracle-database-instance”, to create an instance and prepare the listener:
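Conceptually, the relevant part of the playbook is just a play that applies the role to the temporary container (the host name here is illustrative):

```yaml
# Illustrative excerpt: create the instance and configure the listener
- hosts: oracle-database-instance-build
  roles:
    - oracle-database-instance
```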
After running this Ansible playbook, we will have a new image called “oracle-database-instance”, tagged with its corresponding OS and version.
But this process is a bit different from the previous case: as you can see in the main.yml file, there is an additional step:
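That extra step builds a final image from a Dockerfile, so we can define the command that runs when a container starts. As a sketch (the module parameters and image name here are assumptions, not the exact task from the repository):

```yaml
# Illustrative: build the final image from the Dockerfile in this repository
- name: Build final image with a startup command
  docker_image:
    name: "syscomiddleware/oracle-database-instance"
    path: "."
    state: present
```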
This is an important step, because it involves the usage of Dockerfile to prepare a Docker image.
To give some background: Docker is designed to isolate processes and files, and by convention you should run only one process per container. You use a Dockerfile to specify which command should run, and this process should persist over time, because once it ends, your container stops.
In our case, we need to start our database instance, and to do this we will use a Dockerfile. But, as you may know, there is no out-of-the-box script that starts the instance, keeps the process alive, and prints log messages. We can, however, create something similar:
Thanks to GitHub user “wnameless”, who shares how to do this in his Docker image for Oracle XE: https://github.com/wnameless/docker-oracle-xe-11g
Here, we not only start the database and tail a log file, but also update our listener configuration. Why? Because every time a container starts, the container’s hostname changes. So, to keep our database instance consistent, we have to update our listener.ora accordingly.
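A startup script along those lines might look like this (ORACLE_HOME, SID, and the log path are assumptions for a typical 11g install; adapt them to your actual layout):

```shell
#!/bin/bash
# start-database.sh -- illustrative sketch, not the exact script from the image
export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
export ORACLE_SID=ORCL
export PATH=$ORACLE_HOME/bin:$PATH

# Rewrite the HOST entry in listener.ora with the container's current hostname
sed -i "s/HOST = [^)]*/HOST = $(hostname)/g" \
  $ORACLE_HOME/network/admin/listener.ora

# Start the listener and the database instance
lsnrctl start
echo "startup" | sqlplus / as sysdba

# Tail a log file so the container's main process stays alive
tail -f $ORACLE_HOME/startup.log
```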
Here is our Dockerfile:
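The exact file is in the repository; in outline (the base image tag and script name are assumptions), it does something like this:

```dockerfile
FROM syscomiddleware/oracle-database-instance:11.2.0.1-el6.7

# Script that fixes listener.ora, starts the instance, and tails the log
COPY start-database.sh /start-database.sh
RUN chmod +x /start-database.sh

# Expose the listener port
EXPOSE 1521

# Keep this process in the foreground so the container stays up
CMD ["/start-database.sh"]
```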
As you can see, I start the process and tail the startup log file to keep our container running.
Each time we run this image, this command will be executed, unless you override it in your “docker run” command.
You can test this image by running:
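For example (the image name and tag are assumptions; use whatever your commit step produced):

```shell
docker run -d syscomiddleware/oracle-database-instance:11.2.0.1-el6.7
```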
From another terminal you can check the container name and inspect its IP address:
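Using the Docker CLI (the container name is generated randomly, so replace it with yours):

```shell
# List running containers to find the generated name
docker ps
# Get the container's internal IP address
docker inspect --format '{{ .NetworkSettings.IPAddress }}' adoring_yalow
```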
Let’s take a minute to understand the process: we will have one image per step of our provisioning process, and each step can be tagged by version and OS (and any other relevant information). This creates a group of reusable images, and if we run into issues, we can identify and fix specific steps instead of re-running everything from scratch.
As with learning any other technology, this will take some time at the beginning, but as we understand and collaborate to improve these images, it will be worth the effort.
First, you can see that a container name is assigned randomly: “adoring_yalow”
And test to connect from your favorite IDE using a JDBC URL like:
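With the Oracle thin driver, the URL follows this shape (the IP comes from docker inspect, and the SID depends on how the instance was created):

```
jdbc:oracle:thin:@<container-ip>:1521:<SID>
```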
Step 3: Run SQL scripts
This is a more “custom” step, as you can now use these images for different purposes (e.g. create a schema, run SQL scripts, add more configuration, or, if you are working with Fusion Middleware products, create RCU schemas).
Depending on your use case, it may be easier, or more effective, to use a Dockerfile rather than Ansible playbooks.
In this case, I will show you how to create a schema to start your Java application development, for instance.
I recommend using Docker Compose (https://docs.docker.com/compose/) to simplify the execution of Docker commands and to link containers together.
I have created a repository (https://github.com/jeqo/post-oracle-database-docker) to host this sample.
You can see that there is a directory called “sample”, which Docker Compose will use as the project name.
Inside is a file called “docker-compose.yml” that defines container services:
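A minimal version of that file (the service name and values here are assumptions matching the description; the actual file is in the repository) could be:

```yaml
# docker-compose.yml -- illustrative sketch
db:
  build: ./db
  ports:
    - "10521:1521"
```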
In this case, it defines one service (a container) that will be built from a Dockerfile inside the “db” directory and will forward its port 1521 to port 10521 on your host, so you can use it from a local application.
This Dockerfile contains instructions to create a schema called “test”.
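In outline (the base image and the way the SQL script is wired in are assumptions; the actual Dockerfile is in the repository):

```dockerfile
FROM syscomiddleware/oracle-database-instance:11.2.0.1-el6.7

# SQL script that creates the "test" user/schema
COPY create-test-schema.sql /tmp/create-test-schema.sql
# Startup script that starts the instance, then runs the schema script once
COPY start-and-init.sh /start-and-init.sh
RUN chmod +x /start-and-init.sh

CMD ["/start-and-init.sh"]
```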
Then just execute “docker-compose up -d”, and the container will be built and started.
To check its execution, run “docker-compose logs -f” and that’s it.
You can customize your Docker Compose file, or just start building more layers on top of the database instance image.
At AMIS25, we will show how to use this instance to build a SOA Suite database and then provision custom SOA Suite domains, but also share experiences with different “DevOps” technologies. Hope to see you there!
Originally posted in jeqo’s blog