Effective MUnit Testing with MuleSoft
Having trouble determining exactly how to test your MuleSoft application with MUnit? Look no further!
There's always this intimidating feeling when you're staring at a blank test canvas in Anypoint Studio and asking yourself, "what should I test?" Unlike developing data flows and other integrations, the requirements are not laid out beforehand by an analyst. Instead, it's your job as the developer to figure out what to test and how. While this sounds like a daunting task, there is a simple method you can use to help you build effective MUnit tests.
This post will cover what unit tests are, why we write unit tests, and how we can write effective MUnit tests. It goes hand-in-hand with a virtual presentation I did for the Denver MuleSoft Meetups group. The full code examples for this topic are verbose and typically span multiple files, which tends to distract from the overall message, so I've left the full examples out of this post. I recommend you first read this post, then follow along with the presentation, which contains the complete examples: https://meetups.mulesoft.com/events/details/mulesoft-denver-presents-5-easy-steps-to-effective-unit-testing-in-mulesoft/ If you're reading along and thinking "wow, great point, but how do I accomplish this with MUnit," chances are that is covered in the presentation.
What is a Unit Test?
Unit tests are tests written by developers, for developers. I love this list from artofunittesting.com, which details many of the aspects that make a unit test a unit test. My additions are in [italics]:
- Able to be fully automated [you can run all your tests with a single click or command execution, we get this for free with MUnit and Maven]
- Has full control over all the pieces running (Use mocks or stubs to achieve this isolation when needed) [test makes no attempt to access devices over the network that it does not own]
- Can be run in any order if part of many other tests [e.g. "test a" is not dependent on the results of "test b" to run]
- Runs in memory (no DB or File access, for example) [test makes no attempt to access any system resources outside of the application]
- Consistently returns the same result
- Runs fast
- Tests a single logical concept in the system
- Readable
- Maintainable [like all the code you write! :)]
- Trustworthy (when you see its result, you don’t need to debug the code just to be sure) [never trust a test you wrote unless you've seen it fail]
When it's all said and done, unit tests are tests that help a developer ensure the code they wrote works the way they expect it to.
Why Write Unit Tests?
We write unit tests because they help us guarantee that our code aligns with our expectations of how it should work. It's simply not enough to write source code and think it works. Why not? Because manual testing scenarios (e.g., "I ran a Postman request and got back a 200") cannot be automatically run by other developers, and chances are those scenarios were never well-documented in the first place. Besides, those are integration tests, not unit tests.
An effective set of unit tests can offer developers and the business the following:
- Guaranteed up-to-date documentation of how components of an application are expected to work in the most common scenarios
- Greater level of confidence that changes to an application do not impact functionality in other areas of the same application
- Better internal application design
- Reduced maintenance costs
- Less dependence on the original developer
- Fewer bugs
While this all sounds great, it's not like unit tests are magic; you don't get any of the above benefits just from writing tests. You must write valuable tests. Just like code, tests cost time and money to create. And just like code, we can write tests that cost more resources than they're worth. We want to find a balance between the amount of time we spend creating unit tests and the amount of value we get back from them.
How to Write Effective Unit Tests
The big problem I've found with Mule teams and MUnit is that developers simply don't know how to write valuable test cases. They can write excellent code and meet requirements, but writing tests is a mystery. While this surprised me at first, the reason became clear, and it's very simple: writing code and writing tests are two different skill sets. In this section we will lay down some rules to make sure you have the skills necessary to write great MUnit tests for your team's MuleSoft applications.
1) Understand the Value of Unit Tests
If you don't understand why you're writing MUnit tests, there's no way you can write a good test. Reread "Why Write Unit Tests?" or post your clarifying questions below if you have to, but make sure you understand this first!
2) Assert Code Meets Interface Contracts
When I say "interface" in MuleSoft circles most developers think of the API interface defined by an application's RAML contract. This is just one (very important) interface. Any external dependency your application interacts with also has an interface. For example, if you call out to another HTTP API, that has an interface. If you retrieve data from a database, that database has an interface as well. Simply put, if there is a correct and incorrect way to call a service, it has an interface.
If your code reaches out to an external dependency, you must assert your code calls the external dependency correctly given certain inputs. If you reach out to an API and it expects a certain query param, you must have a test that asserts the query param is provided to the http:request processor. If all you do is mock the processor and provide its expected return value, a developer could remove the query param, effectively breaking the application, and the MUnit test would still pass. If you can break core functionality without your tests breaking, your tests are costing your team valuable resources while offering nothing in return.
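As a minimal, hypothetical sketch (the flow name, config, and doc:name values below are invented for illustration, not taken from the presentation), here's the shape of a test that mocks the outbound call and at least verifies the call is still made. Note that the Verify alone cannot catch a removed query param; that takes the flow-variable-plus-Spy approach covered in step 3.

```xml
<!-- Hypothetical MUnit 2.x test: "get-accounts-flow" and "Request Accounts"
     are illustrative names, not from the original post. -->
<munit:test name="get-accounts-flow-calls-accounts-api-test"
            description="get-accounts-flow must call the Accounts API exactly once">
    <munit:behavior>
        <!-- Stub the outbound HTTP call so the test never touches the network -->
        <munit-tools:mock-when processor="http:request" doc:name="Mock Accounts API">
            <munit-tools:with-attributes>
                <munit-tools:with-attribute attributeName="doc:name" whereValue="Request Accounts"/>
            </munit-tools:with-attributes>
            <munit-tools:then-return>
                <munit-tools:payload value="#[{accounts: []}]" mediaType="application/json"/>
            </munit-tools:then-return>
        </munit-tools:mock-when>
    </munit:behavior>
    <munit:execution>
        <flow-ref name="get-accounts-flow" doc:name="Call get-accounts-flow"/>
    </munit:execution>
    <munit:validation>
        <!-- Without this Verify, removing the http:request call from the flow
             entirely would still leave the test green -->
        <munit-tools:verify-call processor="http:request" times="1" doc:name="Verify Accounts API called">
            <munit-tools:with-attributes>
                <munit-tools:with-attribute attributeName="doc:name" whereValue="Request Accounts"/>
            </munit-tools:with-attributes>
        </munit-tools:verify-call>
    </munit:validation>
</munit:test>
```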
External dependencies are not the only interface contracts you need to test against; every flow and subflow in your application has an interface contract as well: the Mule Event. The Mule Event is a kind of implicit interface. It defines what the Event contains (a Mule Message with Attributes and a Payload, plus flow variables), but it says very little about the constraints on those values. It's your job as a developer to write MUnit tests that make those constraints explicit.
If you have a flow that expects vars.now to be set to a DateTime, then you should have at least one test that sets the variable vars.now to a DateTime and then calls the flow. This communicates to the developer reading the test that the flow expects vars.now to be set and that no other values in the Mule Event are necessary. This rule applies to any attributes and/or payload the flow needs as input in order to perform its task. It also applies to the attributes, variables, and payload expected in the Mule Event at the output of the flow. For those, you simply assert they are correct using MUnit assertions. If you wish to make assertions about values in the middle of the flow, use a Spy.
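As a minimal sketch of what such a contract test could look like (the flow name format-order-flow, the processedDate field, and the timestamp are my own hypothetical examples, and the assertions assume the flow writes vars.now into the output as an ISO-8601 string):

```xml
<!-- Hypothetical flow-contract test: flow name, variable, and fields are
     illustrative assumptions, not taken from the original post. -->
<munit:test name="format-order-flow-stamps-processed-date-test"
            description="Given vars.now and an order payload, the flow stamps processedDate">
    <munit:execution>
        <!-- Input side of the contract: exactly what the flow needs, nothing more -->
        <munit:set-event doc:name="Set input event">
            <munit:payload value="#[{orderId: 'A-100'}]" mediaType="application/json"/>
            <munit:variables>
                <munit:variable key="now" value="#[|2021-06-01T12:00:00Z|]"/>
            </munit:variables>
        </munit:set-event>
        <flow-ref name="format-order-flow" doc:name="Call format-order-flow"/>
    </munit:execution>
    <munit:validation>
        <!-- Output side of the contract -->
        <munit-tools:assert-that expression="#[payload.orderId]"
                                 is="#[MunitTools::equalTo('A-100')]"
                                 doc:name="Assert orderId is preserved"/>
        <!-- Assumes the flow serializes vars.now as an ISO-8601 string -->
        <munit-tools:assert-that expression="#[payload.processedDate]"
                                 is="#[MunitTools::equalTo('2021-06-01T12:00:00Z')]"
                                 doc:name="Assert processedDate comes from vars.now"/>
    </munit:validation>
</munit:test>
```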
3) Write Testable Code
Writing testable code is more of an art than a science, so it's difficult to describe exactly what testable code is. Generally speaking, testable code:
- Is thoughtfully organized into small, easy to understand flows and subflows that each have a single responsibility
- Exposes values that need to be tested. Example: you need to assert a query param is correct for an http:request. You cannot hardcode the query param into the http:request processor, because MUnit has no visibility into that processor. You must instead extract the query param into a flow variable and assert the variable is set correctly before the http:request processor is called. You can use a Spy to accomplish this (a sketch follows this list; an actual example is in the presentation).
- Is deterministic (or can be mocked so that it is deterministic). Deterministic code is code that, given input x, always returns y; given input a, always returns b; and so on. An example of non-deterministic code is an ee:transform processor that uses the result of now() in its output. Every time that processor is called with the same inputs, it will return a different output. You could test this by simply asserting the value is a String or maybe a DateTime, but there is a much better way: assign the return value of now() to a flow variable, vars.now. When you do this, you've exposed the value in a way that lets it be mocked and set to the same value every time your test runs. An actual example of this is in the presentation.
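Here is the sketch referenced above. Everything in it is hypothetical (the accountStatus variable, the Accounts_API config, and the doc:name values are invented for illustration): the application first exposes the query param as a flow variable, and the test's behavior section then spies on the request processor and asserts that variable right before the call.

```xml
<!-- Application side (hypothetical): expose the query param as a flow variable
     so MUnit can see it, then reference the variable in the http:request. -->
<set-variable variableName="accountStatus"
              value="#[attributes.queryParams.status default 'ACTIVE']"
              doc:name="Set accountStatus"/>
<http:request method="GET" path="/accounts" config-ref="Accounts_API" doc:name="Request Accounts">
    <http:query-params>#[{ status: vars.accountStatus }]</http:query-params>
</http:request>

<!-- Test side: a Spy in the munit:behavior section asserts the variable
     immediately before the request is made. -->
<munit-tools:spy processor="http:request" doc:name="Spy Request Accounts">
    <munit-tools:with-attributes>
        <munit-tools:with-attribute attributeName="doc:name" whereValue="Request Accounts"/>
    </munit-tools:with-attributes>
    <munit-tools:before-call>
        <munit-tools:assert-that expression="#[vars.accountStatus]"
                                 is="#[MunitTools::equalTo('ACTIVE')]"
                                 doc:name="Assert status query param"/>
    </munit-tools:before-call>
</munit-tools:spy>
```

The same trick covers the now() case from the second bullet: once the timestamp lives in vars.now, the value can be pinned to a constant in your test (for example, by supplying vars.now through munit:set-event when testing the downstream flow, as in the sketch under step 2), so the transformation becomes deterministic.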
4) Eliminate the Noise
Noise is code or a test that cannot be linked back to a requirement or interface contract. Another example of noise is a test that doesn't assert anything useful ("assert payload is not null," for example). Noise in code creates confusion among developers and can dramatically slow down the process of modifying existing applications. This happens because noise creates situations where developers must second-guess what the code should be doing. For example, if you set the event at the beginning of a test and include a myriad of attributes and variables that the flow being tested does not use, you've just created noise. When another developer comes along and reads your test to understand how the author expected the flow to work, they will have to question whether or not all of those unneeded values are important.
Don't create noise. If you find it, take the time to determine whether it's needed and remove it if it isn't.
5) Valuable Tests First, Code Coverage Second
Many developers will aim for a certain code coverage percentage as a metric of whether or not their code is effectively tested. This is often the result of a decree from management. They have good intentions, but there's a huge problem: you can achieve 100% code coverage while not asserting anything useful about how your application is supposed to work. If you mock everything, use one of those mocks to set a payload, then assert the payload is not null, you've just bumped up your test coverage (which will make your boss happy today) while creating a test that will cost the business more resources than it provides (which will make your boss and fellow developers mad at you later).
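To make the anti-pattern concrete, here's a hypothetical sketch (the flow and names are invented) of the kind of test that bumps coverage while proving nothing about how the application should behave:

```xml
<!-- Anti-example (hypothetical names): this pushes coverage up while
     verifying nothing about how the flow is supposed to work. -->
<munit:test name="get-accounts-flow-coverage-padding-test"
            description="Don't write tests like this">
    <munit:behavior>
        <!-- Mock everything... -->
        <munit-tools:mock-when processor="http:request" doc:name="Mock everything">
            <munit-tools:then-return>
                <munit-tools:payload value="#['anything at all']"/>
            </munit-tools:then-return>
        </munit-tools:mock-when>
    </munit:behavior>
    <munit:execution>
        <flow-ref name="get-accounts-flow" doc:name="Call get-accounts-flow"/>
    </munit:execution>
    <munit:validation>
        <!-- ...then assert something that can never meaningfully fail -->
        <munit-tools:assert-that expression="#[payload]"
                                 is="#[MunitTools::notNullValue()]"
                                 doc:name="Assert payload is not null"/>
    </munit:validation>
</munit:test>
```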
You should always aim to write all your valuable tests first, then worry about code coverage afterwards. Your valuable tests are the ones that communicate what inputs are expected by a flow or external dependency call, and what the outputs should be given those inputs. If your team is prioritizing code coverage over having valuable tests, and you know better because you're smart and read this post, it's your responsibility to help them see why this is a problem. Feel free to send them here!
6) Code Reviews (not included in the presentation)
Finally, your team must be doing high-quality code reviews. The developers reviewing code must understand the difference between a test that adds value to your team and a test that does the opposite. Reviewers who understand what a high-value test looks like must also be able to teach their team how to write these kinds of tests more consistently, and hold them to that standard.
Conclusion
In this post I've covered what a unit test is, why we write unit tests, and 6 simple steps for writing valuable tests in MUnit. Aside from understanding the value of unit tests, the most important point is to test your interface contracts. API definition aside, the two most important sets of interfaces that you should always test are:
- Calls to external systems (extract data from the calling processor into flow variables as needed)
- Inputs and corresponding outputs to flows and subflows
Below I've added some miscellaneous tips that should help you, but they're not as important as the 6 I've defined above. As always, if you have any questions, please leave a comment below or reach out to me on LinkedIn!
- More complex code should probably have more robust tests, and perhaps a higher quantity of tests
- As a rule of thumb, your testing files will be 2-3x longer than the source code files they test.
- Test all paths. Generally flows with choice routers require 1 test per possible path. This includes exception scenarios.
- Testing DataWeave code: oftentimes it's not necessary to test DataWeave code directly. Your interface tests should expose bugs in transformation logic. DataWeave transformers should be an implementation detail.
- Never mock, spy, verify, etc., on doc:id alone; always include at least the doc:name.
- If you use a Spy on a processor, make sure you also have a Verify assertion on the same processor. If the processor the Spy points to is never called, the Spy's assertions will never run. Adding a Verify assertion ensures the test fails when this happens.
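As a final illustration, here's a minimal, hypothetical sketch of that last pairing (the doc:name and variable are invented, and reuse the names from the step 3 sketch); the Spy lives in the test's behavior section and the Verify in its validation section:

```xml
<!-- Hypothetical sketch: pair every Spy with a Verify on the same processor. -->

<!-- In munit:behavior -->
<munit-tools:spy processor="http:request" doc:name="Spy Request Accounts">
    <munit-tools:with-attributes>
        <munit-tools:with-attribute attributeName="doc:name" whereValue="Request Accounts"/>
    </munit-tools:with-attributes>
    <munit-tools:before-call>
        <munit-tools:assert-that expression="#[vars.accountStatus]"
                                 is="#[MunitTools::equalTo('ACTIVE')]"/>
    </munit-tools:before-call>
</munit-tools:spy>

<!-- In munit:validation: fails the test if the spied processor was never called,
     which would otherwise silently skip the Spy's assertions -->
<munit-tools:verify-call processor="http:request" times="1" doc:name="Verify Request Accounts called">
    <munit-tools:with-attributes>
        <munit-tools:with-attribute attributeName="doc:name" whereValue="Request Accounts"/>
    </munit-tools:with-attributes>
</munit-tools:verify-call>
```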