I was at a meetup recently when someone asked the presenter how to manage dependencies between tests. The presenter listed tools that support test execution ordering, so you can run tests in a specific order to satisfy dependencies, and described how to pass data between tests using external sources.
But I don’t think this is a good idea at all.
I believe the best way to manage dependencies between automated tests is to not have automated tests dependent on each other at all.
I have found avoiding something is often better than trying to manage something. Need a storage management solution for your clutter? Avoid clutter. Need a way to manage dependencies between tests? Write independent tests.
As soon as you have tests that require other tests to have passed, you create a complex test spiderweb that makes it hard to work out the true status of any particular test. Not only does this make tests harder to write and debug, it also makes it difficult if not impossible to run them in parallel.
Not having inter-test dependencies doesn’t mean having no dependencies at all. Targeted acceptance tests will still often rely on things like test data (or create it quickly via scripts in test pre-conditions), but this should be minimised as much as possible. The small number of true end-to-end tests that you have should avoid dependencies almost completely.