Writing Tests that Fail Well

Writing better tests is a lifelong pursuit, apparently.

One thing I've been noticing lately, especially while doing code reviews, is that we usually think hard about one question: how do I know the code worked?

Great! But we rarely think ahead to the next question: what am I going to do when this test fails?

All too often, the answer is: stare at a cryptic message that tells me nothing useful, then make changes and/or break out the debugger before I can learn anything at all.

Can we do better? Sure!

I tossed together a lightning talk on this subject for PyGotham 2018; here it is: "Failing Better"

Unfortunately, I (and the speaker after me) got bumped from the schedule when the preceding talks ran into technical issues (projectors == ugh, still!). I might try to deliver it again at some point. It could use better examples, so it's still a work in progress.