Tuesday, November 26, 2019

Designing a unit-test framework in CLIPS for writing custom tests for CLIPS rules, using a multi-file setup

I'd like to make a unit-test-like framework that allows me to write custom tests for individual rules. I'd like each test to be in its own file, i.e. test_R1.clp would be the test file for rule R1. Each test should be able to load its own facts file. I've tried many variations of the following, including using a different defmodule for each file. Is what I'm trying to do even possible in CLIPS? If so, what else is needed to make this work?

I'd like to run my tests via:

CLIPSDOS64.exe -f2 .\test_all.clp

With the current example, the error I get is [EXPRNPSR3] Missing function declaration for setup-tests.

I've gotten a single test to work correctly by using a separate defmodule for each file (i.e. UNITTEST for the testing framework and R1 for the test_R1 file). However, I would still get errors because the current module (and focus) switches automatically when files are loaded, or when functions are defined in other files. I've looked at the Basic and Advanced CLIPS Programming Guides, but if I've missed something there, please let me know.
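
To illustrate, the module layout I have in mind is roughly the following; the export/import specifications and the explicit switch back are only a sketch, not code I have working:

;;; Sketch only: UNITTEST holds the framework, R1 holds the tests for rule R1.
(defmodule UNITTEST (export ?ALL))

(defmodule R1 (import UNITTEST ?ALL))

;;; Defining (or loading) a defmodule makes it the current module, so the
;;; runner has to switch back explicitly after each load:
(set-current-module UNITTEST)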

Other specific questions:

  1. Since some tests may load facts that overwrite existing facts, how do I prevent errors from redefining existing facts? Do I need to do a (clear) between running each test? (See the sketch just below.)
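
For context, a rough sketch of what the two cleanup commands do between tests, assuming each test's baseline facts come from deffacts/definstances:

(reset)   ; removes all facts and instances, then re-asserts deffacts and
          ; re-creates definstances; deftemplates, deffunctions and rules stay
          ; loaded (defglobals are set back to their initial values)
(clear)   ; additionally removes every construct, so TestingFramework.clp and
          ; the rule files would have to be reloaded afterwards

;;; Note: a (reset) would also remove the test_to_run facts asserted by
;;; add_test below, so test registrations would need to be re-added after it.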

TestingFramework.clp:

;;; File: TestingFramework.clp

(defglobal ?*tests-counter* = 0)
(defglobal ?*all-tests-passed* = TRUE)
(defglobal ?*failed-tests-counter* = 0)

(deftemplate test_to_run
   (slot testid)
   (slot testname)
   (slot testsetupfunc)
   (slot testcheckfunc))

(deffunction test-check (?test-name ?test-condition)
   (if (eval ?test-condition)
       then (printout t "SUCCESS: Test " ?test-name crlf)
            (printout test_results_file "SUCCESS: Test " ?test-name crlf)
            (return TRUE)
       else (printout t "FAILURE: Test " ?test-name crlf)
            (printout test_results_file "FAILURE: Test " ?test-name crlf)
            (return FALSE)))

(deffunction setup_tests ()
    (open "test_summary_results.txt" test_results_file "w"))

(deffunction finish_tests ()
    (close test_results_file))

(deffunction add_test (?test-name ?test-setup-func ?test-check-func)
    (bind ?*tests-counter* (+ 1 ?*tests-counter*))
    (assert (test_to_run (testid ?*tests-counter*)
                         (testname ?test-name)
                         (testsetupfunc ?test-setup-func)
                         (testcheckfunc ?test-check-func))))

(deffunction run_all_tests ()
    (printout t "About to run " ?*tests-counter* " test(s):" crlf)
    (do-for-all-facts ((?ttr_fact test_to_run)) TRUE
        (funcall (fact-slot-value ?ttr_fact testsetupfunc))
        (if (funcall (fact-slot-value ?ttr_fact testcheckfunc))
            then (printout t "    SUCCESS" crlf)
            else (printout t "    FAILURE" crlf)
                 (bind ?*failed-tests-counter* (+ 1 ?*failed-tests-counter*))
                 (bind ?*all-tests-passed* FALSE)))
    (if ?*all-tests-passed*
        then (printout t "All " ?*tests-counter* " tests passed successfully." crlf)
        else (printout t ?*failed-tests-counter* "/" ?*tests-counter* " tests failed." crlf)))

tests\test_R1.clp:

;;; File: test_R1.clp
;;; Tests for Rule 1

(deffunction R1_TEST_1_SETUP ()
    (load* "FluidSystem_facts_demo.clp")
    (load* "FluidSystem_rules_demo.clp")
    (reset))

(deffunction R1_TEST_1 ()
    (send [JacketWaterInletTempReading] put-hasValue 35.0)
    (send [JacketWaterInletTempReading] put-hasValueDefined DEFINED)
    (send [JacketWaterOutletTempReading] put-hasValue 37.0)
    (send [JacketWaterOutletTempReading] put-hasValueDefined DEFINED)
    (run)
    (return (member$ [DissimilarHighTempFlowRate] (send [CounterFlowHeatExchanger] get-hasIssue))))

test_all.clp:

;;; File: test_all.clp
;;; Run tests via:
;;; CLIPSDOS64.exe -f2 .\test_all.clp

(load* "TestingFramework.clp")
(setup-tests)

;;; Test R1
(load* "tests\\test_R1.clp")
(add_test "R1_TEST_1" R1_TEST_1_SETUP R1_TEST_1)
(clear)  ;; unsure if this is needed

;;; ... more tests to follow

(run_all_tests)

(finish_tests)
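
One mismatch worth flagging in the listings above: TestingFramework.clp defines the function as setup_tests (with an underscore), while test_all.clp calls setup-tests (with a hyphen). CLIPS treats those as two different symbols, which lines up with the [EXPRNPSR3] message quoted earlier; making the call match the definition, for example

(setup_tests)   ;; matches (deffunction setup_tests ...) in TestingFramework.clp

should at least clear that particular error. The finish_tests and run_all_tests calls already match their definitions.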
