We have thought a lot about testing over the past year. We ran into some difficulties with our last new model: the testing methods we had established were overly cumbersome in ways that didn't add much value as far as mitigating model risk. So we spent some time putting together a framework meant to give our model builders guidance on selecting appropriate testing methods. We try to emphasize that various methods are acceptable and that professional judgment should be employed to ensure the testing completed will actually help us mitigate model risk. Testing is always required and model builders must document their testing, but we try not to dictate that one specific method be used in every case. The idea is that security updates are very different from small dashboard tweaks, which are very different from creating new journal entry processes, so the testing should be different as well. Our view is that the model builder should take whatever steps are needed to feel comfortable that the build is correct, and should document that process in a test script; another model builder ought to be able to look at the test script and feel comfortable with it as well.

For live models, we also require a change log to be maintained. This is an Excel file where the model builder must list all of the changes that were made. For each change, they should specify the item that was changed, provide a brief description, include the old/new formulas (if applicable), and answer a few key questions designed to help them think through downstream impacts of the change (regarding security updates, impacts to existing data, and new maintenance requirements). Once an item is complete, the model builder also provides a link to the testing documentation, and then the change log and testing documentation are reviewed by another model builder prior to promoting changes to production.
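To make the change-log idea concrete, an entry like the one described above could be modeled roughly as follows. This is only an illustrative sketch: the field names, the example values, and the `ready_for_review` check are hypothetical stand-ins for our actual Excel columns, not a real template.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ChangeLogEntry:
    # Hypothetical fields mirroring the Excel columns described above.
    item_changed: str
    description: str
    old_formula: Optional[str] = None       # if applicable
    new_formula: Optional[str] = None       # if applicable
    # The three downstream-impact questions (None = not yet answered).
    security_update_needed: Optional[bool] = None
    impacts_existing_data: Optional[bool] = None
    new_maintenance_required: Optional[bool] = None
    testing_doc_link: Optional[str] = None  # filled in once testing is done

    def ready_for_review(self) -> bool:
        """An entry is ready for peer review only once all impact
        questions are answered and testing documentation is linked."""
        return (
            self.security_update_needed is not None
            and self.impacts_existing_data is not None
            and self.new_maintenance_required is not None
            and bool(self.testing_doc_link)
        )

# Hypothetical example entry.
entry = ChangeLogEntry(
    item_changed="Revenue line item",
    description="Updated allocation formula",
    old_formula="Units * Price",
    new_formula="Units * Price * FX Rate",
    security_update_needed=False,
    impacts_existing_data=True,
    new_maintenance_required=False,
    testing_doc_link="https://example.com/test-script",
)
print(entry.ready_for_review())  # True
```

The point of the completion check is simply that the reviewer never has to chase down unanswered impact questions or missing test documentation before promotion.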
We are also trying to tie testing directly to user stories. We strongly encourage/require that every user story be tested and that testing documentation be prepared, again allowing the model builder to use professional judgment as to what testing is necessary. If we find that we need to go back and edit a previously built item, that might become a new user story created during the sprint, or we might include certain re-testing elements as part of a future test script. On top of testing each user story/change log item, we also encourage the model builder team to consider what other tests would be appropriate for the development being done. That may include parallel testing, regression testing, usability testing, and/or security testing, and it is very dependent on the specific development.

Our COE team stays very involved in testing at this point. I would say that about 50-75% of all model changes are either done by or reviewed by the COE. We are trying to reduce that number and put more in the hands of the model builders, but we do want to make sure that sufficient and thorough review is done, especially for higher-risk changes, and many of our model builders are still fairly new to the tool. We also primarily manage the change logs and ensure that changes are appropriately documented, reviews are lined up, and summaries of changes are provided to the model owners to request model promotion approvals.

One thing we are still trying to fully solve is how to get to a good level of comfort that all changes to existing models are included in the change log (and thus tested). I would love to hear others' thoughts on this. Do other admins review the ALM comparison reports back against the list of expected changes, or run regular model compares to check this? How detailed are your checks? Does anyone else use something like a change log where all model changes are documented/reviewed?
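For what it's worth, the check I have in mind for that last question could be as simple as a two-way set comparison between the items flagged in a compare report and the items listed in the change log. A rough sketch, where the item names and the idea of reducing both sources to plain sets of changed items are my assumptions, not any actual ALM report format:

```python
# Hypothetical reconciliation of a model-compare export against a change log.
# Assumes both sources can be reduced to sets of changed item names.

def reconcile(compared_items: set, change_log_items: set):
    """Return (unlogged, unmatched):
    unlogged  - items that changed in the model but were never logged
    unmatched - logged items with no corresponding model change found
    """
    unlogged = compared_items - change_log_items
    unmatched = change_log_items - compared_items
    return unlogged, unmatched

# Example with made-up item names.
compare_report = {"Revenue", "FX Rate", "Opex Dashboard"}
change_log = {"Revenue", "FX Rate"}

unlogged, unmatched = reconcile(compare_report, change_log)
print(sorted(unlogged))   # ['Opex Dashboard'] -> needs a change-log entry
print(sorted(unmatched))  # [] -> every logged change was found in the compare
```

Anything in the first bucket means a change skipped the log (and likely testing); anything in the second means the log claims a change that the compare didn't pick up, which is worth investigating before promotion. I'd be curious whether anyone does this at a finer grain than item names.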