Overview of what I have:
I have a large pre-developed framework of shell scripts: about 50k lines of code spread across roughly 5k files. We are now moving the whole setup from a Unix/Teradata environment to AWS Redshift. As a result, I will have to replace a large number of the original BTEQ commands in the code with standard INSERT, SELECT, DELETE, etc. statements. I have already identified every function that needs to change, the script in which each change must be made, and the specific line number to modify. I plan to use this as a change document.
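Since the change document already lists the script, line number, and replacement for every edit, it could drive the edits mechanically. Below is a minimal sketch of that idea; the CSV format (`script,line,replacement`), the file names, and the `psql` replacement command are all my assumptions, not anything from the original setup:

```shell
#!/bin/sh
# Sketch: apply edits listed in a change document.
# Assumed CSV format: script_name,line_number,replacement_command
set -e

# Build a tiny demo script and change document in a temp dir
workdir=$(mktemp -d)
cat > "$workdir/load_sales.sh" <<'EOF'
echo "step 1"
bteq < load_sales.bteq
echo "step 3"
EOF

cat > "$workdir/changes.csv" <<'EOF'
load_sales.sh,2,psql -f load_sales.sql
EOF

# Apply each change: overwrite the named line in the named script.
# (GNU sed; on BSD/macOS the -i flag needs an explicit suffix argument.)
while IFS=, read -r script line new_cmd; do
  sed -i "${line}s|.*|${new_cmd}|" "$workdir/$script"
done < "$workdir/changes.csv"

sed -n '2p' "$workdir/load_sales.sh"   # shows the replaced line
```

The advantage of driving the edits from the document rather than editing by hand is that the same document becomes the audit trail: every modified line is known, which is exactly what the per-line test results later need to join against.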
The approach I am considering
I need to make the changes (about 2k lines of code need to be modified) and do it in a way that makes the code test itself. For example: after running a query, the code itself would evaluate the result against a test case and append the outcome to a file on Unix. When the system runs, I could then read that results file and derive directly which lines of code failed and which did not. Because the changes are small and spread across the whole system, I have not been able to devise a workable unit test strategy; system integration testing does not help either, because not all of the code runs on every execution (it would require path testing).
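The self-testing idea above can be sketched as a small helper that each migrated script calls after its query: run the statement, compare the result to an expected value, and append PASS/FAIL with an identifier and line number to a results file. Everything here is illustrative: `run_query` is a stub (in the real system it would presumably invoke `psql` against Redshift), and the queries, expected values, and line numbers are made up:

```shell
#!/bin/sh
# Sketch of the self-testing wrapper. Assumptions: run_query is a stub;
# a real version might be: psql -h "$REDSHIFT_HOST" -At -c "$1"
RESULTS=${RESULTS:-/tmp/migration_test_results.txt}
: > "$RESULTS"

run_query() {
  # Stub: pretend every query returns a row count of 42.
  echo 42
}

# check <test_id> <query> <expected> <line_no>
check() {
  actual=$(run_query "$2")
  if [ "$actual" = "$3" ]; then
    status=PASS
  else
    status=FAIL
  fi
  echo "$status $1 (line $4 of $0): expected $3, got $actual" >> "$RESULTS"
}

# Hypothetical checks placed right after the migrated statements they verify
check sales_rowcount "SELECT COUNT(*) FROM sales;" 42 17
check stale_rows     "SELECT COUNT(*) FROM sales WHERE load_dt IS NULL;" 0 21

cat "$RESULTS"
```

One design note: keying each result line by a test id plus script/line number means the results file can be joined back against the change document, so after a full run you can see at a glance which of the ~2k modified lines were exercised and which of those failed, without needing every code path to run in a single test cycle.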
Please help me evaluate this approach against any alternative approaches you can think of for this problem.