Now, let’s dive into the details!
We maintain all our demo workspaces, tutorials, and help texts in our SaaS production environment – this makes it easier to keep everything up to date. When running offline demos or setting up on-site solutions at customers, we need a way to copy data from the production environment to seed these installations.
Prior to using Docker, this was an error-prone, ad-hoc activity. Since moving to Docker, we are able to do it as part of every build. Preparing and applying a data seed involves the following steps:
1. Copy production data and clean it
2. Package and distribute the seed data
3. Import the data
Step 1 – Cleaning the database dump
To filter out environment-specific metadata, we need to clean up the database dump before it is distributed. The core of the cleanup is a throwaway MongoDB Docker container. Using a separate step with a throwaway container is a security precaution – we don’t want to expose the filtered-out data in a lower Docker filesystem layer.
We start the container with the database dump available in a mounted volume, fire up MongoDB, run a cleanup script against it, and finally dump the cleaned database back to the mounted volume. The container is then discarded, so the only side effect of the process is the cleaned database dump.
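The cleanup step can be sketched roughly as follows. The image tag, paths, and the `cleanup.js` script name are illustrative assumptions, not our actual setup; the key points are the mounted volume and the `--rm` flag, which discards the container (and any intermediate filesystem layers) once the cleaned dump has been written back out:

```shell
# Illustrative sketch – directory layout and script names are assumptions.
DUMP_DIR="$PWD/dump"   # holds the raw production mongodump and cleanup.js

# Guard so the sketch is a no-op on machines without Docker.
if command -v docker >/dev/null 2>&1; then
  docker run --rm \
    -v "$DUMP_DIR":/seed \
    mongo:6 \
    bash -c '
      mongod --fork --logpath /var/log/mongod.log  # start MongoDB inside the container
      mongorestore /seed/raw                       # load the raw production dump
      mongosh /seed/cleanup.js                     # strip environment-specific metadata
      mongodump --out /seed/clean                  # write the cleaned dump to the volume
    '
fi
```

Because the container is started with `--rm` and the dump lives on a mounted volume, nothing from the uncleaned database survives the run – only the cleaned dump in `$DUMP_DIR/clean` remains on the host.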