Test Double is a generic term for any case where you replace a production object for testing purposes. Gerard Meszaros lists several kinds of double: Dummy, Fake, Stub, Spy, and Mock.
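The stub/mock distinction is the one that matters most in practice: a stub supplies canned behaviour so the code under test can run, while a mock records the interaction so you can verify it. A minimal Python sketch (the MailService and confirm_order names are invented for illustration), using unittest.mock for the mock variant:

```python
from unittest.mock import Mock

# Hypothetical collaborator we want to double out.
class MailService:
    def send(self, to, body):
        raise NotImplementedError("talks to an SMTP server in production")

def confirm_order(order_id, mail_service):
    mail_service.send(to="customer@example.com", body=f"Order {order_id} confirmed")
    return "confirmed"

# Stub: canned behaviour, no verification of the interaction.
class StubMailService:
    def send(self, to, body):
        pass  # swallow the call; we only care about confirm_order's return value

assert confirm_order(42, StubMailService()) == "confirmed"

# Mock: records calls so the interaction itself can be asserted on.
mock_mail = Mock()
confirm_order(42, mock_mail)
mock_mail.send.assert_called_once_with(to="customer@example.com",
                                       body="Order 42 confirmed")
```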
A TDD cycle is composed of three iterative phases.
A Pivotal Labs developer (author of Jasmine) links to research papers by IBM and Microsoft. They conclude: defect density was reduced by 40% at IBM and by 60%–90% at Microsoft, while the time taken to code a feature increased by 15%–35%. Overall, TDD saves time and money.
This document demonstrates that the studied group of TDD practitioners spent only 16% more time on the overall development process.
Following Ian Cooper's famous talk "TDD: Where did it all go wrong?":
Writing one test class per implementation class implies:
- over-isolation (mocking everything)
- testing impl instead of behaviour
The red-green-refactor cycle needs to be made more precise:
- Red: Write a unit test; the unit test fails.
- Green: Write production code; the unit test passes. Even returning a hardcoded value is fine at this stage: just write code without design, patterns, or structure.
- Refactor: Refactor the code; the unit test still passes. This is where to add design.
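The cycle can be sketched on a tiny example (the leap_year function is invented for illustration; the earlier phases are shown as comments since only the final state is runnable):

```python
# Red: write a failing test first -- leap_year does not exist yet.
# Green (first pass): the simplest thing that passes, even a hardcoded value:
#     def leap_year(year):
#         return True          # enough for the very first test
# Further red tests force generalisation; Refactor is where design appears:
def leap_year(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# The unit tests that drove each step of the cycle:
assert leap_year(2024) is True    # the first red test
assert leap_year(2023) is False   # forces us past the hardcoded return
assert leap_year(1900) is False   # century rule
assert leap_year(2000) is True    # 400-year rule
```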
When you do this right, you end up with several classes that are all tested by a single test-class. This is how things should be.
The big mistake comes when you get carried away with isolation. If you were to undertake too much design up-front, you could end up with one test-class per class in your program.
The term "isolation" means that you don’t cross a port. This means not relying on network, database, file system, service or anything else that you might add an adaptor for in a Hexagonal Architecture.
Integration testing is what you’re doing if your tests cross a port.
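A sketch of what "not crossing a port" looks like, assuming a hexagonal design with an invented UserRepository port: the unit test swaps the database adapter for an in-memory fake, so the test never crosses the port; an integration test would run the same assertion against the real database-backed adapter.

```python
from typing import Protocol

class UserRepository(Protocol):      # the port
    def find_email(self, user_id: int) -> str: ...

def greeting(user_id: int, repo: UserRepository) -> str:
    # Behaviour under test: build a greeting for a stored user.
    return f"Hello, {repo.find_email(user_id)}!"

# In-memory fake adapter: the unit test never touches a real database.
class InMemoryUserRepository:
    def __init__(self, users):
        self._users = users
    def find_email(self, user_id: int) -> str:
        return self._users[user_id]

repo = InMemoryUserRepository({1: "ada@example.com"})
assert greeting(1, repo) == "Hello, ada@example.com!"
```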
My view on what constituted a "unit" was pretty much a "class file" before I realised that this makes tests too reliant on the implementation.
This is the cause of over-isolation.
The recommendation here is that you use these low-level tests for discovery if you need them – but you don’t check them in to source control. Ian described these as "driving in first gear", which you sometimes need to do – but you want to be going faster than this.
New glossary:
- Unit. Before: a class file. After: a collection of classes, but nothing that crosses a port.
- Integration. Before: a test that relies on something else being there. After: a test that crosses a port.
- Isolation. Before: replacing every dependency a class has with a test double. After: replacing dependencies at the port.
Unit tests are written only after some production code (e.g., the code necessary for a feature of the task) is present.
Behaviour-driven development (BDD) is an agile method that encourages collaboration between developers, quality managers, non-technical stakeholders, and business people involved in a software project. It was devised in 2003 by Dan North as a response to Test-Driven Development.
A template to capture a story's acceptance criteria:
As a [X], where X is the person (or role) who will benefit,
I want [Y], where Y is some feature,
so that [Z], where Z is the benefit or value of the feature.
Acceptance criteria in terms of scenarios:
Given some initial context (the givens),
When an event occurs,
Then ensure some outcomes.
The story card
+Title: Customer withdraws cash+
As a customer,
I want to withdraw cash from an ATM,
so that I don’t have to wait in line at the bank.
So how do we know when we have delivered this story? There are several scenarios to consider.
+Scenario 1: Account is in credit+
Given the account is in credit
And the card is valid
And the dispenser contains cash
When the customer requests cash
Then ensure the account is debited
And ensure cash is dispensed
And ensure the card is returned
+Scenario 2: Account is overdrawn past the overdraft limit+
Given the account is overdrawn
And the card is valid
When the customer requests cash
Then ensure a rejection message is displayed
And ensure cash is not dispensed
And ensure the card is returned
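Scenario 1 above maps naturally onto an automated given/when/then test. A sketch with an invented Account/Atm model (the class shapes are assumptions, not part of the original story):

```python
class Account:
    def __init__(self, balance):
        self.balance = balance

class Atm:
    def __init__(self, cash_available):
        self.cash_available = cash_available
        self.dispensed = 0
        self.card_returned = False

    def request_cash(self, account, amount, card_valid):
        # Scenario 1: account in credit, valid card, cash in the dispenser.
        if card_valid and account.balance >= amount and self.cash_available >= amount:
            account.balance -= amount       # the account is debited
            self.cash_available -= amount
            self.dispensed = amount         # cash is dispensed
        self.card_returned = True           # the card is returned either way

# Given
account = Account(balance=100)
atm = Atm(cash_available=500)
# When
atm.request_cash(account, amount=40, card_valid=True)
# Then
assert account.balance == 60
assert atm.dispensed == 40
assert atm.card_returned is True
```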
- What are you testing?
- What should it do?
- What is the actual output?
- What is the expected output?
- How can the test be reproduced?
- TDD is too Time Consuming. The Business Team Would Never Approve
- You Can’t Write Tests Until You Know the Design, & You Can’t Know the Design Until You Implement the Code
- You Have to Write All Tests Before You Start the Code
- Red, Green, and ALWAYS Refactor?
- Everything Needs Unit Tests
- It’s common for initial project build-outs to take up to 30% longer with TDD (src)
- TDD reduces production bug density 40% — 80% (src)
- fixing a production bug costs 100x more than fixing a bug at design time, and over 15x more than fixing a bug at implementation time (src)
- Code reviews have similar effects. According to a 1988 study, each hour spent in code review saves 33 hours in maintenance (src).
- the cost of fixing bugs that get released in production isn’t just about the cost of fixing the production bug. Interruptions increase the cost of current development work, and introduce more bugs that will eventually need fixing, too
- Unit tests ensure that individual components of the app work as expected. Assertions test the component API.
- Integration tests ensure that component collaborations work as expected. Assertions may test component API, UI, or side-effects (such as database I/O, logging, etc…)
- Functional tests ensure that the app works as expected from the user’s perspective. Assertions primarily test the user interface.
- Unit tests
- Integration tests
- Functional tests
- End-to-end tests
- Acceptance testing
- Performance testing
- Smoke testing
Since Kent Beck wrote the book on TDD in 2002, a lot of words have been dedicated to the subject.
But many of them propagated misunderstandings of Kent's original rules so that TDD practice bears little resemblance to Kent's original ideas.
Key misunderstandings around what do I test, what is a unit test, and what is the 'public interface' have led to test suites that are brittle, hard to read, and do not support easy refactoring.
In this talk, we re-discover Kent's original proposition, discover where key misunderstandings occurred and look at a better approach to TDD that supports software development instead of impeding it. Be prepared for some sacred cows to be slaughtered and fewer but better tests to be written.
This post discusses the talk "TDD, where did it all go wrong" by Ian Cooper, which was given in June 2013.
- you should write unit tests for every method and class that you introduce in an application
- but this will necessarily result in you baking implementation details into your tests
- causing them to be fragile when refactoring, contain a lot of mocking,
- result in a high proportion of test code to implementation code
- and ultimately slowing your time to market (TTM)
Testing behaviours rather than implementations
Ian suggests that the trigger for adding a new test to the system should be adding a new behaviour rather than adding a method or class.
your tests can focus on expressing and verifying behaviours that users care about rather than implementation details
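A sketch of the difference, with an invented ShoppingCart: the behaviour test goes through the public interface and survives refactoring, while a test that peeks at internals does not.

```python
class ShoppingCart:
    def __init__(self):
        self._items = []        # implementation detail: could become a dict

    def add(self, name, price):
        self._items.append((name, price))

    def total(self):
        return sum(price for _, price in self._items)

# Behaviour test: expressed entirely through the public interface.
cart = ShoppingCart()
cart.add("book", 10)
cart.add("pen", 2)
assert cart.total() == 12

# Implementation test (avoid): breaks the moment _items changes shape,
# even though the observable behaviour is unchanged.
# assert cart._items == [("book", 10), ("pen", 2)]
```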
TDD and refactoring
Ian suggests that the original TDD Flow outlined by Kent Beck has been lost in translation by most people.
Red. Green. Refactor.
Red. You write a test that represents the behaviour that is needed from the system.
Green. You write minimal code to make the test green.
Refactor. This is the only time you should add design.
When you do this right, you end up with several classes that are all tested by a single test-class. This is how things should be. The tests document the requirements of the system with minimal knowledge of the implementation. The implementation could be One Massive Function or it could be a bunch of classes.
Ian points out that you cannot refactor if you have implementation details in your tests because by definition, refactoring is where you change implementation details and not the public interface or the tests.
Ports and adapters
Ian suggests that one way to test behaviours rather than implementation details is to use a ports and adapters architecture and test via the ports.
There is another video where he provides some more concrete examples of what he means.
One side effect of having unit tests for every method/class is that you are then trying to mock out every collaborator of every object and that necessarily means that you are trying to mock implementation details.
Using mocks of implementation details significantly increases the fragility of tests reducing their effectiveness.
Mocks still have their place (systems I/O)
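A sketch of mocking only at the I/O boundary, using unittest.mock on an invented payment-gateway client: the domain logic inside the boundary stays real, and only the external system is doubled.

```python
from unittest.mock import Mock

def checkout(amount, gateway):
    # Real domain logic stays unmocked; only the external gateway is doubled.
    fee = round(amount * 0.02, 2)
    gateway.charge(amount + fee)
    return fee

gateway = Mock()                 # stands in for the external HTTP client
fee = checkout(100.0, gateway)

assert fee == 2.0
gateway.charge.assert_called_once_with(102.0)
```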
Problems with higher level unit tests
- Complex implementation One of the questions raised and answered in Ian's presentation was what to do when the code you are implementing to make a high-level unit test pass is really complex and you find yourself needing more guidance.
- Combinatorial explosion I’ve covered this comprehensively in the previous article. This can be a serious problem, but as per the previous section in those instances just write the lower-level tests.
- Complex tests The other point Ian raised is that because you are interacting with more objects, there may be more to set up in your tests, which makes the arrange section harder to understand and maintain, reducing the advantage of writing the tests in the first place.
- Multiple test failures It’s definitely possible that you can cause multiple tests to fail by changing one thing.
- Shared code gets tested twice That's fine, because the shared code is an implementation detail.
This post discusses the talk "Integrated Tests Are A Scam" by J.B. Rainsberger, which was given in November 2013.
Some fallacies about unit testing
- TDD is all about unit tests
- Automated testing is all about unit tests
- 100% code coverage requires extensive unit testing
- You have to make private methods public to reach 100% coverage
- Some code does not need to be tested
- You need to use a mocking framework
- Tests are expensive to write
- The ‘testing pyramid’ is the ultimate testing strategy
What about some truths?
- Unit tests are not about testing a method in isolation
- 100% coverage does not mean your code is bug free
- There is a tooling problem
- It is difficult
- Tests require maintenance
- Having too many tests is a problem
- Throwing away tests is a hygienic move
- Automated tests are useful
Suppose you have a class called ledger, with a method called calculate, that uses a Calculator to do different types of calculations depending on the arguments passed to calculate.
Now, suppose you want to test what happens when you call ledger.calculate("5 * 7").
The London/Interaction school would have you assert whether Calculator.multiply(5, 7) got called. The various mocking frameworks are useful for this, and it can be very useful if, for example, you don't have ownership of the Calculator object (suppose it is an external component or service that you cannot test directly, but you do know you have to call it in a particular way).
The Chicago/State school would have you assert whether the result is 35. The jUnit/nUnit frameworks are generally geared towards doing this.
Both are valid and important tests.
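Both styles can be sketched in Python (the ledger/Calculator shapes are guessed from the description above, not taken from any real code):

```python
from unittest.mock import Mock

class Calculator:
    def multiply(self, a, b):
        return a * b

class Ledger:
    def __init__(self, calculator):
        self.calculator = calculator

    def calculate(self, expression):
        left, op, right = expression.split()
        if op == "*":
            return self.calculator.multiply(int(left), int(right))
        raise ValueError(f"unsupported operator: {op}")

# Chicago/State school: assert on the result.
assert Ledger(Calculator()).calculate("5 * 7") == 35

# London/Interaction school: assert that the collaboration happened.
calc = Mock()
Ledger(calc).calculate("5 * 7")
calc.multiply.assert_called_once_with(5, 7)
```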