In the past several weeks I have been busy improving the OPY tool. It started as a simple Oracle Python wrapper, but I have added more and more functionality to it and now it has almost 3000 lines, a nifty YAML config file (per project now) and does way more than just execute queries against an Oracle database.
You can read about:
- basic versioning of database objects and APEX applications in Git
- versioning data as CSV files and converting them to MERGE statements in patch files
And now I have added greater support for feature branches (and improved many other things).
What is this about?
The feature-branch style is typical on larger projects with larger teams. You are assigned a card (or you pick one from a queue, depending on how agile you are), you create a branch just for this card, you work on the card (change objects in the database, change APEX pages and components), you test it and commit the result to this branch. Then, depending on your process, you push/merge it to the source branch or create a pull request against the source branch, and you move on to another card. The cycle repeats until you burn down.
I really dislike the manual approach, where you change something in the database and then export the objects or changed APEX pages by hand. It is even worse when you have to create the patch script manually. Many times I have forgotten something. Hence I created this tool, so it does this for me. But on a big project I don't want to export all objects every time I want to commit something, because that can take up to several minutes. Since I like to commit often (not just when the card is fully done), I need the process to be quick. So I need to export just the objects I am working on. When it is a card where you are changing/fixing something quickly, you are fine with the -recent mode. Pick a number of days into the past and it will export just the objects compiled since then. Most of the time this is good enough.
```
# export objects changed today
python export.py -n PROJECT -r=1
```
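For context, here is a minimal sketch of what such a "recent" filter can boil down to on the database side. The query uses the standard USER_OBJECTS dictionary view and its LAST_DDL_TIME column; the connection library (python-oracledb) and the function itself are my assumptions for illustration, not OPY's actual code.

```
# sketch only: resolving a "-r=1" style filter against the data dictionary
import oracledb  # assumption: any Oracle driver with bind variables works the same way

RECENT_OBJECTS_SQL = """
    SELECT object_type, object_name
      FROM user_objects
     WHERE last_ddl_time >= TRUNC(SYSDATE) - :days
     ORDER BY object_type, object_name
"""

def recently_changed_objects(connection, days=1):
    # objects compiled or changed within the last N days
    with connection.cursor() as cursor:
        cursor.execute(RECENT_OBJECTS_SQL, days=days)
        return cursor.fetchall()
```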
But sometimes I work on several objects for weeks (let's call it a small app, a project) in a schema shared with other people and, unfortunately, other projects. So I don't want to be distracted by objects changed by others. I developed the -lock mode for this. Basically, you export the whole schema, then delete the files you are not interested in and lock what is left. From that point on, every time you export this schema you will export just the objects on this list. Of course you can add new files to the list (it is just a locked.log file) and you can also remove the lock (by removing the locked.log file). You can also use the -add flag to add newly created objects to the list.
```
# export whole schema
python export.py -n PROJECT

# delete unrelated files
# lock what is left
python export.py -n PROJECT -lock

# refresh locked objects whenever you like
python export.py -n PROJECT

# add new objects created today to the list
python export.py -n PROJECT -lock -add
```
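To make the behaviour concrete, here is a rough sketch of how a locked.log filter can work. The file name comes from the article; the paths, function names and the exact -add semantics below are only an illustration, not OPY's real implementation.

```
import os

LOCKED_FILE = 'locked.log'   # one exported file path per line

def load_locked_list(root):
    # returns the set of files the export is restricted to, or None when no lock exists
    path = os.path.join(root, LOCKED_FILE)
    if not os.path.exists(path):
        return None
    with open(path, encoding='utf-8') as f:
        return {line.strip() for line in f if line.strip()}

def filter_exports(all_files, locked, new_files=(), add_new=False):
    # with a lock in place, export only the listed files;
    # the -add flag would first append freshly created objects to the lock list
    if locked is None:
        return list(all_files)
    if add_new:
        locked.update(new_files)
    return [f for f in all_files if f in locked]
```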
I also store this locked.log file in the branch, so when other developers join, they are on the same page, and it is also clear which objects are relevant to the project.
Improved patching
The tool now supports multiple environments, so you can track changes against UAT, PROD and whatever environment you need in dedicated files. You can also put a card name after the environment name and it will create named (dedicated) patch files. You can adjust many things in the YAML config file.
```
# show differences since the last release to UAT
# and create a patch file as patches_done/UAT/YYYY-MM-DD.sql
python export.py -n PROJECT -patch UAT

# show differences since the last release to UAT
# and create a patch file as patches_done/UAT/CARD_NUMBER.sql
python export.py -n PROJECT -patch UAT CARD_NUMBER

# show differences since the last release to PROD
# and create a patch file as patches_done/PROD/YYYY-MM-DD.sql
python export.py -n PROJECT -patch PROD
```
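As a rough illustration of that naming scheme: only the patches_done/ENV/NAME.sql pattern comes from the output above, the helper function and its defaults are mine.

```
import os
from datetime import date

def patch_file_path(env, card=None, root='patches_done'):
    # patches_done/UAT/YYYY-MM-DD.sql when no card is given,
    # patches_done/UAT/CARD_NUMBER.sql when a card name is passed
    name = card if card else date.today().isoformat()
    return os.path.join(root, env, name + '.sql')
```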
The new patch file consists of two parts. The first is a map showing which objects changed, in an installable order (based on object dependencies; in -verbose mode it will even show you the related objects). Below that it lists the changed files as references (@@filename), also in the proper order and including the files from the patches/ folder.
```
# map objects to patch folders
patch_map:
  init: []
  tables:
    - SEQUENCE
    - TABLE
  objects:
    - INDEX
    - MATERIALIZED VIEW
    - VIEW
    - TRIGGER
    - PROCEDURE
    - FUNCTION
    - PACKAGE
    - PACKAGE BODY
    - SYNONYM
  data:
    - DATA
  grants:
    - GRANT
  jobs:
    - JOB

# patch folders, sorted
patch_folders:
  init    : '#ROOT#/patches/10_init/'
  tables  : '#ROOT#/patches/20_new_tables/'
  changes : '#ROOT#/patches/30_table+data_changes/'
  objects : '#ROOT#/patches/40_repeatable_objects/'
  cleanup : '#ROOT#/patches/50_cleanup/'
  data    : '#ROOT#/patches/60_data/'
  grants  : '#ROOT#/patches/70_grants/'
  jobs    : '#ROOT#/patches/80_jobs/'
  finally : '#ROOT#/patches/90_finally/'
```
That's where this part of the setup comes into play. It adds files from the patches/10_init/ folder first, then files from patches/20_new_tables/, followed by all SEQUENCE and TABLE objects. Basically, it adds the files from a specific patches/ folder, then adds the related objects (based on the patch_map list).
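A simplified sketch of that ordering logic, assuming the YAML above has been loaded into a dict and that the changed objects are grouped by object type. The helper just interleaves each folder's manual scripts with the exported files mapped to that folder; it is not the tool's actual implementation.

```
import glob
import os

def build_patch_sequence(config, changed_objects):
    # changed_objects: e.g. {'TABLE': ['tables/foo.sql'], 'PACKAGE BODY': ['packages/bar.sql']}
    # walks patch_folders in their configured order and, after each folder's own
    # scripts, appends the exported files whose object types map to that folder
    lines = []
    for folder_key, folder_path in config['patch_folders'].items():
        folder = folder_path.replace('#ROOT#/', '')
        for script in sorted(glob.glob(os.path.join(folder, '*.sql'))):
            lines.append('@@' + script)
        for object_type in config['patch_map'].get(folder_key, []):
            for exported_file in changed_objects.get(object_type, []):
                lines.append('@@' + exported_file)
    return lines
```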