I’m new to test automation. I’m writing Selenium/Protractor tests in C# within the project solution, which allows developers to run all of my UI tests alongside their own unit tests.
The project is all very new, and big chunks aren’t built yet. I’m trying to grow my tests along with the project as each function is fleshed out.
I’m struggling with test data! The BAs have had a tool built for them which allows them to create sets of test data in XML and have it all imported. This seems a bit cumbersome for my uses, and I’d prefer to seed my test data programmatically. I have mostly figured out how to use the data layer of our application to get data in, but the amount of test data being created is quickly getting out of hand and is very hard to manage.
Should each test case seed its own test data as part of the test run? This has the benefit that if requirements change, the test will fail, and I can go directly to it and amend the test data to match the new requirements.
Or, should test data be separated out in a central location?
I answered a similar question to this yesterday, so it might help to read that first.
It’s great to hear you’re writing tests alongside the application code: I have found this leads to better collaboration and to automated testing that is more useful and more readily adopted.
As per that other post, I find that a combination works quite well: seed generic test data in a central location where it can be shared across many different tests, and programmatically create and destroy test-specific data in test hooks (via scripts or APIs). I avoid manually created data as much as possible, because it isn’t easily repeatable.
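As a rough sketch of the hook-based part of that approach (assuming NUnit as the test framework; `TestDataContext`, `Customers.Create`, and `Customers.Delete` are hypothetical stand-ins for whatever your application's data layer exposes):

```csharp
using NUnit.Framework;

[TestFixture]
public class CustomerSearchTests
{
    // Hypothetical wrapper around the application's data layer.
    private TestDataContext _data;
    private Customer _customer;

    [SetUp]
    public void SeedTestData()
    {
        _data = new TestDataContext();
        // Seed only what this fixture needs. Shared reference data
        // (countries, currencies, lookup values, ...) belongs in the
        // central seed instead, so it isn't duplicated per test.
        _customer = _data.Customers.Create(name: "UI Test Customer");
    }

    [TearDown]
    public void DestroyTestData()
    {
        // Remove everything this fixture created, so repeated runs
        // start from the same known state.
        _data.Customers.Delete(_customer.Id);
        _data.Dispose();
    }

    [Test]
    public void Search_ReturnsSeededCustomer()
    {
        // ... drive Selenium against the seeded customer here ...
    }
}
```

The design choice here is that each test owns the data it mutates, while read-only reference data stays central; when a requirement changes, only the fixture that seeds the affected data needs amending.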