Deploying Matrix - which directories have changing data?

I am working on a faster way to set up a new Matrix install, and to deploy updates/upgrades.


Can someone let me know which directories contain dynamic data (this has to be kept between upgrades), and which contain the code (this changes between upgrades)?



(AFAIK these are cache and data)



Question 2: if only the data directory exists (and not its children) do Matrix assets create the directories they need on the fly?

cheers,



Richard

The directories which contain dynamic data are "cache" and "data", as you said. The remaining ones, "core", "install", "fudge", "packages", "php_includes" and "scripts", contain the code. You don't want to have an empty "data" directory at a fresh install, and you certainly don't want an empty "data" directory when upgrading.
During upgrades table schemas are changed: step2 needs to be run to update the DB schemas, and it will also update some DB PHP cache files in the data directory.

For a standard upgrade, only compile_locale and step3 will be run; compile_locale might change contents in the data folder, and so might step3.

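For reference, an upgrade run looks roughly like this from the Matrix root (a sketch only; the exact script arguments are from memory, so double-check against the upgrade docs for your version):

    # sketch of an upgrade run from the Matrix root (arguments from memory)
    php install/step_02.php .          # updates DB schemas and the db cache files under data/
    php install/compile_locale.php .   # may rewrite locale content under data/
    php install/step_03.php .          # installs/updates packages; may also touch data/
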
For a fresh installation you'd need main.inc, db.inc and licence.inc, so you would need a non-empty data dir.

Huan

Hi Richard,

[quote]I am working on a faster way to set up a new Matrix install, and to deploy updates/upgrades.

Can someone let me know which directories contain dynamic data (this has to be kept between upgrades), and which contain the code (this changes between upgrades)?

(AFAIK these are cache and data)[/quote]

There are also a couple of directories which store database queries; these will also need to be kept. I have provided a complete list below:


  • cache
  • data
  • core/lib/DAL/Oven
  • core/lib/DAL/QueryStore

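If it helps, a quick way to snapshot just those four before an upgrade (a sketch; run from the Matrix root, and the archive name is only an example):

    # back up only the directories that must survive an upgrade
    tar -czf matrix-dynamic-backup.tar.gz cache data core/lib/DAL/Oven core/lib/DAL/QueryStore
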
[quote]Question 2: if only the data directory exists (and not its children) do Matrix assets create the directories they need on the fly?[/quote]
Most of the "data" structure is created during the installation steps - directories like "public" and "private".
As the database and other supporting configuration is stored within this structure, it is not possible to recreate it for a system in this state without reinstalling.

Edit: Huan's input above (posted while I was writing this) is also appreciated :)

Thanks for the input guys!


Here is what I have so far:



    $ cap local:update_upstream_from_tar

This is run with a Matrix code tar in the current directory.



It sets up a local git repository with two branches:

  • upstream
  • master

It checks the code into the upstream branch and merges it into master. It also runs step_01 and commits this code to master.


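Under the hood the task boils down to something like this (a sketch; the tarball name is a placeholder for whatever release file you have):

    # import the vendor tarball onto the upstream branch
    git checkout upstream
    tar -xzf matrix-release.tar.gz --strip-components=1
    git add -A
    git commit -m "Import new Matrix release"
    # merge the vendor code into the working branch
    git checkout master
    git merge upstream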

    $ cap deploy:setup

This command sets up the remote code layout for the first time:

  1. a shared directory for data, cache and DAL (data contains the same directories as in the tar file)
  2. a releases directory for the main code


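Setup amounts to little more than creating the directory skeleton on the server (a sketch; the base path is just my install location):

    # one-off layout on the server (base path is an example)
    mkdir -p /home/intranet/webapps/matrix/releases
    mkdir -p /home/intranet/webapps/matrix/shared/cache
    mkdir -p /home/intranet/webapps/matrix/shared/data
    mkdir -p /home/intranet/webapps/matrix/shared/DAL/Oven
    mkdir -p /home/intranet/webapps/matrix/shared/DAL/QueryStore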

    $ cap deploy:prepare

This command grabs the latest code from the master branch and deploys it to a timestamped folder in the releases directory. It links in the shared folders data, cache and the DAL stuff.

At this stage any required changes are made on the server for the upgrade.


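Roughly what prepare runs on the server (a sketch; paths are from my setup and the copy step is simplified):

    # deploy the master branch into a timestamped release dir
    RELEASE=/home/intranet/webapps/matrix/releases/$(date +%Y%m%d%H%M%S)
    mkdir -p "$RELEASE"
    git archive master | tar -xf - -C "$RELEASE"
    # swap the packaged dirs for links to the shared copies
    rm -rf "$RELEASE/cache" "$RELEASE/data" "$RELEASE/core/lib/DAL/Oven" "$RELEASE/core/lib/DAL/QueryStore"
    ln -nfs /home/intranet/webapps/matrix/shared/cache "$RELEASE/cache"
    ln -nfs /home/intranet/webapps/matrix/shared/data "$RELEASE/data"
    ln -nfs /home/intranet/webapps/matrix/shared/DAL/Oven "$RELEASE/core/lib/DAL/Oven"
    ln -nfs /home/intranet/webapps/matrix/shared/DAL/QueryStore "$RELEASE/core/lib/DAL/QueryStore"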

    $ cap deploy:complete

This links the latest release code to the web root.

If anything goes wrong you type

    $ cap deploy:rollback

and it reverts to the old code (which will still work if this is a minor upgrade).


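The last two are just symlink shuffles (a sketch, using the paths from my install; the previous timestamp is a placeholder):

    # complete: point the web root at the new release
    ln -nfs /home/intranet/webapps/matrix/releases/20090524212408 /home/intranet/webapps/matrix/current
    # rollback: point it back at whichever release was live before
    ln -nfs /home/intranet/webapps/matrix/releases/<previous-timestamp> /home/intranet/webapps/matrix/current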



For the next release you put the new tar file in the project directory and run:

    $ cap local:update_upstream_from_tar

This switches to the upstream branch, commits the new code to the branch, and switches back to master.

You have to manually merge prior to deployment, and this means you can diff the two branches of the repo to see the difference between the current release and the new code. Quite handy.


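For anyone following along, the comparison is a one-liner:

    # compare the deployed code (master) against the new vendor code (upstream)
    git diff master upstream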

Comments welcome …

cheers,

Richard

output of tree:


web root is linked to current

    .
    |-- current -> /home/intranet/webapps/matrix/releases/20090524212408
    |-- releases
    |   |-- 20090524212408
    |   |   |-- cache -> /home/intranet/webapps/matrix/shared/cache
    |   |   |-- config
    
    snip
    
    `-- shared
        |-- DAL
        |   |-- Oven
        |   `-- QueryStore
        |-- cache
        `-- data
            |-- private
            |   |-- asset_map
            |   |-- asset_types
            |   |-- assets
            |   |-- conf
            |   |-- db
            |   |-- events
            |   |-- logs
            |   |-- maps
            |   |   |-- downloaded_patches
            |   |   `-- installed_patches
            |   |-- packages
            |   `-- system
            |-- public
            |   |-- asset_types
            |   |-- assets
            |   |-- system
            |   `-- temp
            `-- temp

[quote]You have to manually merge prior to deployment, and this means you can diff the two branches of the repo to see the difference between the current release and the new code. Quite handy.

Comments welcome …[/quote]

This is quite impressive for code management. If I had more knowledge of git I'm sure I'd like it even more :)

The reason for using git is that it is way faster than CVS, and there is no need to check out from CVS to a live system anymore.

Also, Capistrano does not support CVS!



There are a few more tasks to add - one to check dependencies, for example.



I am going to use this to deploy Matrix to an Intranet server here, so I will see how long it takes.



How much time does it typically take to do a clean install on a single server based on Debian?

[quote]The reason for using git is that it is way faster than CVS, and there is no need to check out from CVS to a live system anymore.
Also, Capistrano does not support CVS![/quote]

As this is pretty much automated, you would need to check for errors (either visually or by looking in error.log) and, should there be any problems, roll back to a database backup (and potentially a data dir backup) along with the code. I would recommend manual intervention if there are any issues in the upgrade process, as the potential issues vary in severity.

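For example, keeping an eye on the log while the upgrade runs (assuming the standard log location under the data dir shown in the tree above):

    # watch the Matrix error log during the upgrade
    tail -f data/private/logs/error.log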

[quote]How much time does it typically take to do a clean install on a single server based on Debian?[/quote]

Using our dev server, which is running Debian, I would guess that we could roll out an installation within 10 minutes.

[quote]As this is pretty much automated, you would need to check for errors (either visually or by looking in error.log) and, should there be any problems, roll back to a database backup (and potentially a data dir backup) along with the code. I would recommend manual intervention if there are any issues in the upgrade process, as the potential issues vary in severity.
Using our dev server, which is running Debian, I would guess that we could roll out an installation within 10 minutes.[/quote]

The non-invasive, easy-to-type-wrong stuff will be automated.



Also, with this system you can push out the code to many servers at the same time. So you could do a test deploy to a staging server first:



    $ cap staging deploy



and when tested:



    $ cap production deploy



Capistrano will run the same commands on many servers; it checks for errors, rolls everything back if one occurs, and tells you what the error was.





R.