One phrase that is always sure to raise the ire of any good honest developer when something breaks is:-
“well, it works on my machine”
This simple statement shows a complete disregard for any other sort of testing that you might need to do to ensure that your feature works correctly and is “done, done”, not just almost done. But there is a new kid on the block when it comes to showing how little some people understand about software development, one that I’m beginning to hear with alarming regularity:-
“well, all the unit tests passed”
It seems that modern development practices have unknowingly created the Silver Bullet that Fred Brooks has always told us never existed! Apparently good unit test coverage and automated refactoring tools mean that it’s so unlikely a bug would only show up during integration or system testing that those phases are just old fashioned. Or, if not altogether outdated, then reduced to a footnote in the product’s testing strategy on the basis that there is so much less value in them than in unit testing.
Don’t get me wrong: I can understand a genuine mistake caused by a seemingly unrelated change - accidents happen and it could be a fault of the design - but changing the configuration file for a service and then not even bothering to see if it starts up is just laziness. Yes, it does take time and effort to do more extensive testing in your sandbox, but the feedback loop can still be fast, and you won’t annoy your team mates by costing them a day’s system testing over a silly mistake.
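To make that concrete, here’s a minimal sketch of the kind of sandbox smoke test I mean: start the service with the changed configuration and check that it actually comes up. The service command and health-check URL are invented for illustration; substitute whatever your own service uses.

    import subprocess
    import time
    import urllib.error
    import urllib.request

    # Hypothetical values for illustration: substitute your service's
    # start-up command and health-check URL.
    SERVICE_CMD = ["./orderservice", "--config", "sandbox.config"]
    HEALTH_URL = "http://localhost:8080/health"

    def smoke_test(timeout_secs=30):
        """Start the service with the new config and verify it comes up."""
        process = subprocess.Popen(SERVICE_CMD)
        try:
            deadline = time.monotonic() + timeout_secs
            while time.monotonic() < deadline:
                # Bail out early if the process died, e.g. on a config parse error.
                if process.poll() is not None:
                    raise RuntimeError(f"service exited with code {process.returncode}")
                try:
                    with urllib.request.urlopen(HEALTH_URL, timeout=2) as response:
                        if response.status == 200:
                            return  # up and answering: the new config at least loads
                except (urllib.error.URLError, OSError):
                    time.sleep(1)  # not listening yet; retry
            raise RuntimeError(f"service not healthy within {timeout_secs}s")
        finally:
            process.terminate()
            process.wait()

    if __name__ == "__main__":
        smoke_test()
        print("smoke test passed: service started with the new configuration")

Something like this takes seconds to run, so the feedback loop stays tight; it just has to exist, and be run, before you check in.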
The rule of thumb about not checking in code until the unit tests pass was designed to make you think about writing fast tests so that the barriers to testing are as low as possible; it was never intended as a justification for short-circuiting the amount of testing you do.
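One way to square that circle, sketched here with pytest (the marker name, service command and --check-config flag are all assumptions made up for the example), is to split the suite so the fast unit tests still gate every edit while the slower sandbox tests are marked separately rather than quietly dropped:

    import subprocess

    import pytest

    def calculate_price(quantity, unit_price):
        return quantity * unit_price

    def test_price_calculation():
        # Fast, in-memory unit test: cheap enough to run on every edit.
        assert calculate_price(3, 10) == 30

    @pytest.mark.integration
    def test_service_starts_with_new_config():
        # Slower sandbox test: ask the service to validate the changed config
        # and exit. The --check-config flag is hypothetical; if your service
        # has nothing similar, reuse the smoke test sketched earlier.
        subprocess.run(["./orderservice", "--config", "sandbox.config",
                        "--check-config"], check=True, timeout=30)

Register the marker in pytest.ini so pytest doesn’t warn about it; then pytest -m "not integration" gives you the quick feedback loop while you work, and a plain pytest runs the lot before you check in. The point is that the slow tests still run - just not on every keystroke.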