
What is an elegant way to call Robot Framework tests with automatically generated arguments?

Problem summary: I am currently trying to migrate existing tests from pure Python to Robot Framework in order to benefit from its nice reporting features. These system tests have to be re-run with multiple parameter sets, each consisting of many parameters. That's why I already have a Python generator yielding dictionaries with all possible parameter configurations, as well as methods that generate a readable description for each configuration.
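A minimal sketch of what that generator looks like (the parameter names and the tiny parameter space here are just placeholders, not my real code):

import itertools

# Placeholder parameter space; the real one has far more parameters,
# most with more than two states each.
PARAMETER_SPACE = {
    "par1": [0, 1],
    "par2": [0, 1],
    "par3": [0, 1],
}

def yield_all_possible_configurations():
    """Yield one dictionary per combination of parameter values."""
    names = list(PARAMETER_SPACE)
    for values in itertools.product(*PARAMETER_SPACE.values()):
        yield dict(zip(names, values))

def describe(configuration):
    """Build a readable description for one configuration."""
    return ", ".join(f"{name}={value}" for name, value in configuration.items())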

I would like to achieve:

  1. A report where every set of parameters corresponds to one test case, as in the data-driven RF style
  2. Readable test cases, without having to abuse the generators to emit ugly hard-coded test data

Data-driven approach: I used my generators to write a data-driven test file in the following format, which gives me pretty much exactly the output I want. The problem with this approach is that the descriptions I'd like to use as test case names are quite long, and there are far more than three parameters to vary, most of them with more than two states. That renders the generated .robot file unreadable. One more thing I dislike about the output: the report doesn't show the names of the parameters used for each test case, so the test name really has to carry all the information about all the parameters. Other than that, if there is no better solution, this is what I'll go with despite the unreadable intermediate step (a sketch of how that step could be generated follows the example).

*** Settings ***
Test Template  Check Result With Args

*** Keywords ***
Check Result With Args
    [Arguments]     ${par1}
    ...             ${par2}
    ...             ${par3}
    Set par    par1    ${par1}
    Set par    par2    ${par2}
    Set par    par3    ${par3}
    Evaluation
    Check result

*** Test Cases ***    par1    par2    par3
description000        0       0       0
description001        0       0       1
description010        0       1       0
description011        0       1       1
description100        1       0       0
description101        1       0       1
description110        1       1       0
description111        1       1       1
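Since that intermediate file is generated anyway, regenerating it is at least cheap. A minimal sketch of how the test case section could be emitted, reusing the placeholder generator from above (the Settings and Keywords sections would stay hand-written):

def write_test_cases_section(path):
    # Header row; the parameter names after *** Test Cases *** are just
    # a comment documenting the columns.
    lines = ["*** Test Cases ***    " + "    ".join(PARAMETER_SPACE)]
    for configuration in yield_all_possible_configurations():
        # Cells are separated by four spaces; the test name may contain
        # single spaces but must not contain two or more in a row.
        values = "    ".join(str(v) for v in configuration.values())
        lines.append(f"{describe(configuration)}    {values}")
    with open(path, "w") as robot_file:
        robot_file.write("\n".join(lines) + "\n")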

The code of the following keyword-driven approach is much more to my taste, but the output doesn't look nice: there is just one test case in the report, and it calls one keyword multiple times. Further disadvantages: the test setup and teardown are each run only once, and the Describe Parameters keyword does log the information about the test case, but it isn't visible right away; it's hidden inside the keyword call. Using the dictionary as a parameter seems pretty nice, though, because that way the report shows the dictionary keys next to their values for each loop iteration.

*** Keywords ***
Set Parameters And Check Result
    [Arguments]     ${parameter dictionary}
    Describe Parameters      ${parameter dictionary}
    Set Parameters      ${parameter dictionary}
    Evaluation
    Check result

*** Test Cases ***
Check Result For All Possible Configurations
    ${all configs} =   Yield All Possible Configurations
    FOR    ${configuration}    IN    @{all configs}
        Set Parameters And Check Result    ${configuration}
    END

I also tried using [Template] within a test case, but that solution just combines the bad parts of the two approaches shown here. One thing I haven't tried yet is giving the whole test case arguments and running it from Python (see the sketch below). I feel like there should be an elegant solution using a test template in combination with a FOR loop, and maybe embedded arguments, but I could not figure it out yet. Thanks for your help in advance!
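For the "run it from Python" idea, something along these lines might work with Robot Framework's public API. This is only a sketch: it assumes RF 4+ (older versions used test.keywords.create instead of test.body.create_keyword), the resource file name is a placeholder, and the keyword and placeholder generator come from the examples above:

from robot.api import ResultWriter, TestSuite

suite = TestSuite("All Possible Configurations")
# Hypothetical resource file holding Check Result With Args etc.
suite.resource.imports.resource("parameter_keywords.resource")

for configuration in yield_all_possible_configurations():
    # One test case per configuration, named by its readable description.
    test = suite.tests.create(name=describe(configuration))
    test.body.create_keyword(
        "Check Result With Args",
        args=[str(value) for value in configuration.values()],
    )

result = suite.run(output="output.xml")
ResultWriter("output.xml").write_results(report="report.html", log="log.html")

That way every configuration would show up as its own test case in the report, much like the data-driven approach but without the unreadable intermediate file (test setup and teardown would have to be configured on the created suite or tests, though).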
