Agile Test Data
The trend in agile testing and test automation over the last decade has been increasingly towards frameworks that allow tests to be specified in a domain specific language (DSL). It was recognised some time ago that traditional automated test approaches have a "representation problem", in that they are difficult for non-technical stakeholders to review and contribute to. Forming a common language for defining test scope and processes is critical to the success of a testing strategy.
The DSL trend arrived through the use of Excel-based "keyword driven frameworks" as an abstraction layer on top of automated test tools, and later evolved into Agile and Behaviour Driven Development (BDD) approaches, which use open source tools to offer even more flexibility in test automation implementations.
One of the great benefits of this kind of testing is that it brings forward many of the questions testers have about how requirements will be implemented, front-loading much of the complexity and driving the right conversations up front.
We have been applying a different lens to these tools, and using them to design our test data.
Why is Test Data Important?
Test data is key to proving that complex financial systems are mission-capable; however, it rarely receives the focus it deserves in the software development process. Often, due to data security controls, it is difficult to get data close to real production data, and not enough focus is put on ensuring that any manufactured test data is representative and, even more importantly, traverses all the relevant paths through the software under test.
We believe that complementing the BDD toolset with a specific focus on the test data domain helps define test data jointly between project stakeholders, at the time of system design, and drives the right conversations up front. These definitions can then be used with test data generators to provide early mocks and stubs, as well as large scale test data for e.g. performance testing. But would this work in practice?
The Open Banking Standard
Recently we became interested in the Open Banking Standard (OBS), which establishes a framework for the usage and sharing of banking data through open APIs. The Open Banking Working Group (OBWG) report from February 2016 provided a list of proposals and a draft data structure to pave the way for delivering the open banking standard throughout the UK. You can read more about our thoughts on API Banking here.
The Open Banking Standard offered us a chance to demonstrate the use of domain specific languages in an industry-relevant setting. Taking the 143 data items specified in the OBS report, we started to define them in a bespoke DSL, with the aim of using it to pass instructions, called 'tests', to an in-house test data generator. For now, this DSL is called 'Carrot'.
Carrot follows the Given-When-Then style of requirement and test specification (also known as Gherkin) for describing test data.
Carrot tests start by defining the output format and relationships using ‘Given’ statements, then move on to creating actual data using ‘When’. Typically, ‘Then’ is only used when introducing new functionality to the underlying test data generator, to validate the output. In essence, defining a ‘Given-When-Then’ test now gives us a convenient, scalable and repeatable method of generating test data in different formats.
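To make the idea concrete, here is a minimal sketch of how a Given-When-Then specification could drive a test data generator. The statement wording, field names and parsing logic are illustrative assumptions, not the real Carrot syntax or implementation:

```python
import json
import random

# Hypothetical Carrot-style specification; the statements below are
# illustrative assumptions, not the actual DSL described in the article.
SPEC = """
Given the output format is JSON
Given a customer has fields name, sort_code
When 3 customers are generated
"""

FIRST_NAMES = ["Alice", "Bob", "Carol", "Dan"]

def generate(spec, seed=42):
    """Parse the Given/When lines and emit test data accordingly."""
    rng = random.Random(seed)  # seeded so generated data is repeatable
    fields, count = [], 0
    for line in spec.strip().splitlines():
        if line.startswith("Given a customer has fields"):
            # 'Given' statements define structure and relationships
            fields = [f.strip() for f in line.split("fields")[1].split(",")]
        elif line.startswith("When"):
            # 'When' statements trigger the actual data creation
            count = int(line.split()[1])
    customers = []
    for _ in range(count):
        record = {}
        for field in fields:
            if field == "name":
                record[field] = rng.choice(FIRST_NAMES)
            elif field == "sort_code":
                record[field] = "%02d-%02d-%02d" % (
                    rng.randint(0, 99), rng.randint(0, 99), rng.randint(0, 99))
        customers.append(record)
    return json.dumps(customers)
```

Because the same specification can be re-run with a different count or seed, the spec itself becomes the repeatable, reviewable artefact rather than any one generated data set.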
The preliminary technical work, starting on GitHub, for the Open Banking Standard is using JSON structures for the data. With this in mind, we have linked our build server to a web server, so that the latest data we’ve generated is accessible through a JSON endpoint.
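A minimal version of such an endpoint can be sketched with the Python standard library alone. The path and payload here are assumptions for illustration, not the actual endpoint the build server exposes:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

# Placeholder payload standing in for the latest generated test data.
LATEST_DATA = [{"account": "12345678", "balance": "250.00"}]

class TestDataHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/testdata":  # assumed path, for illustration only
            body = json.dumps(LATEST_DATA).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

    def log_message(self, *args):
        pass  # suppress per-request logging

def serve(port=0):
    """Start the server on a background thread; return (server, port)."""
    server = HTTPServer(("127.0.0.1", port), TestDataHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server, server.server_address[1]
```

In practice the build server would refresh the payload after each generation run, so consumers always fetch the latest data from the same URL.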
The user-friendliness of a "Given-When-Then" style of language, compared to traditional programming or scripting languages, makes test data generation easy to adapt even for projects in their infancy, such as the Open Banking Standard.
Carrot has enabled us to generate large samples of customers, accounts and transactions – data that could potentially be used for other purposes. Watch this video to see it in action.
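The key property of such data is that the entities are linked: accounts belong to customers, and transactions belong to accounts. A minimal sketch of generating relational data of this shape, with hypothetical entity and field names rather than the actual OBS data items:

```python
import random

def generate_bank_data(n_customers=2, accounts_each=2, txns_each=3, seed=7):
    """Generate linked customers, accounts and transactions.

    Entity and field names are illustrative assumptions; the real data
    items are the 143 defined in the OBS report.
    """
    rng = random.Random(seed)  # seeded so repeated runs produce the same data
    customers, accounts, transactions = [], [], []
    for customer_id in range(n_customers):
        customers.append({"customer_id": customer_id})
        for _ in range(accounts_each):
            account_id = len(accounts)
            # each account references its owning customer
            accounts.append({"account_id": account_id,
                             "customer_id": customer_id})
            for _ in range(txns_each):
                # each transaction references its parent account
                transactions.append({
                    "account_id": account_id,
                    "amount": round(rng.uniform(-100.0, 100.0), 2),
                })
    return {"customers": customers, "accounts": accounts,
            "transactions": transactions}
```

Scaling the counts up yields the large samples mentioned above while preserving referential integrity between the three entity types.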
Once we had understood all the data sets, built all the data generation, and stood it up via a web service, we realised it looked and smelled just like a bank would (subject to agreement and development of the aforementioned APIs, that is!).
This is really useful for Piccadilly Labs – as we can keep using this to try out all the new test tools and techniques which cross our desks!
For more information please contact email@example.com
Adam Smith – Director