Just my opinion, but I wouldn't do it that way. It seems like you're trying to answer the question "How do we test the tests?" Writing more code to test the tests is a deep rat hole.
I rely on pair programming, TDD, and code review. Since the tests essentially become your detailed design documentation and usage examples for your application's APIs, you should have people reviewing the tests for clarity and correctness. Ask these basic questions during review or while pair programming/TDD:
1. Does the test name reveal its intent well? Does it give a good summary of what the test is for?
2. Is the test verifying the right thing? Is the test making the proper assertions?
3. Is the test verifying the thing right? Are the inputs and setup reflective of the actual environment that the code being tested will be subjected to?
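To make those three questions concrete, here's a minimal sketch in Python's `unittest` style. The `apply_discount` function and its tests are entirely hypothetical, invented just to illustrate what a reviewer would look for: intent-revealing names, assertions on the actual contract, and inputs that resemble real usage.

```python
import unittest

# Hypothetical function under test, used only for illustration.
def apply_discount(price, percent):
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    # 1. The name reveals intent: the input scenario and expected outcome.
    def test_twenty_percent_discount_reduces_price_accordingly(self):
        self.assertEqual(apply_discount(50.00, 20), 40.00)

    # 2. Verifying the right thing: the error contract itself,
    #    not merely "it didn't crash".
    def test_discount_over_one_hundred_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(50.00, 150)

    # 3. Verifying the thing right: inputs mirror real usage
    #    (currency amounts with cents, boundary values),
    #    not just numbers that happen to be convenient.
    def test_zero_percent_discount_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(19.99, 0), 19.99)
```

A reviewer can read those test names alone and get a summary of the function's behavior, which is exactly the "tests as design documentation" property described above.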
You should still use static code analysis and test coverage tools to get a feel for where you have holes or areas of weakness in the code and tests, but I wouldn't put more than a day's worth of effort into customizing what these tools already do well.
In the end, your running application will be the ultimate verification of the completeness and correctness of your tests.