Friday, 28 March 2014

SFIA Skills framework

There is this cool skills framework, SFIA (the Skills Framework for the Information Age), which helps you structure your professional skills and knowledge, and our company adopted it for internal use. So when I recently decided to review what I know and where the biggest gaps in my professional knowledge are (which I love to do from time to time), I got really inspired.

It probably doesn't work for everyone, but for me it's a perfect fit. I use my imagination a lot to get the spirit of each skill level and apply it to the exact area of expertise I'm looking at (in this case - performance testing). It works kinda like Satisfice's Heuristic Test Strategy Model: you don't get any instructions - instead you get directions to think about and tools to categorize and/or quantify this seemingly unquantifiable ball of knowledge, skills and experience you keep in your head.

So, long story short, after I assessed myself against the next competency level for my job (that is, going from Intermediate to Senior), I got an impressive to-do list for self-development. It's funny how I sometimes miss doing functional testing, but at the same time I'm glad I switched to performance testing, because it gave me so much motivation for learning. If I want to get as good at performance testing as I am at functional testing, I need to learn a lot, and I need to get all sorts of experience.

When I learn, I like to do it in big leaps of reading/trying, but I also like to switch between several different topics and give myself time to process new information in the background. That said, currently my focus is Java performance. That includes both tuning the JVM for performance and knowing how to write highly performant Java code (algorithms, data structures, concurrency). I feel like when I'm finished I might be better at programming than a lot of actual Java developers. Or maybe not. Anyway, that would be awesome. I like letting the balance between testing and development skills shift back and forth. And it's cool that some jobs actually require you to have it this way. E.g. Google's test engineers, who are pretty much really good, highly technical testers. Or performance engineers. Or software developers building testing frameworks.

Wednesday, 19 March 2014

How to make JMeter save results to a database

Our team is responsible for performance testing in the company, and JMeter is one of our major tools. Recently we realized that to make our job easier we need to save aggregated test results to a database instead of a file. There are different use cases for that, the most important being to run periodic automated performance tests and graph the results to see if a new build has dropped in performance. You can do it manually, sure, but saving results to a database makes it so much easier.

Turns out there is currently no plugin for JMeter (that we know of) that would allow you to do that. So a colleague of mine made a plugin to do just that. Then I got interested, and decided to do it a bit differently, and found a problem in his plugin, and... to make a long story short, now we have two slightly different plugins that save aggregated results to a database.

I'll probably try and share the source code on JMeter's GitHub once I'm not too lazy to figure out how to do that, but for now I just want to share a few things I learned along the way. And boy, was it a harsh way. JMeter has the most unfriendly API I've ever seen (though to be fair I haven't seen many), it looks crazy, and I couldn't find answers to my questions on the internet when I needed them, so here you go. Maybe the next person who decides to make a fancy visualizer will benefit from this post.

So, you've decided to write a visualizer that does something more than just calculating statistics differently or showing the data you already have in a prettier graph. Here are some tips that go beyond common sense and the general ability to write Java code and google.
  • Do not put any logic in your visualizer class, because in non-GUI mode JMeter behaves as if your class doesn't exist, so you will get no logic whatsoever. Think about the standard Aggregate Report: in non-GUI mode all you can do is save each and every request to a file; you cannot just get the aggregated table. That's because the aggregation happens in the visualizer code, and the visualizer doesn't get initialized in non-GUI mode.
  • In non-GUI mode, samples are processed by ResultCollectors. This is where you want to put all your logic. To do that you need to implement two classes and integrate them with each other:
    • implement your visualizer (extend AbstractVisualizer)
    • implement your result collector (must extend ResultCollector, or it won't get started)
    • override "createTestElement" method in the visualizer and create your result collector. You must also override "modifyTestElement" and "configure" methods and make sure the proper constructor of your result collector is called. See the example 1 for "createTestElement" in the end of the post. If you don't do it, JMeter won't know that your two classes are connected. Basically this is where you say: okay, my GUI shows information which really comes from this TestElement, so please create this TestElement even when you skip the GUI.
  • In your result collector, override the "sampleOccurred" method and put all your aggregation logic there (see the collector sketch right after this list). You also probably want to call "super.sampleOccurred" at the beginning - this way you get the standard logic (check if the sample is okay, send it to the visualizer) as well as your new one.
  • If you need to get settings from the GUI, make sure they are saved as properties of your result collector (and not as properties of your visualizer).
  • Keep in mind that you can only access properties after the constructor has run. Yep, you cannot do it in the constructor, even if your result collector is fully initialized at that point. JMeter runs its "give test elements their properties from the jmx file" step only after it has initialized those elements.
  • If you want to use JMeter variables (e.g. you have a field "Test run description" and you want to put some context-dependent info there), be aware: you only get access to parsed variables in the result collector, not in the visualizer. In other words, getPropertyAsString will give you "${host}" if called from the visualizer, but "192.168.7.15" if called from the result collector for the same property.
  • If you want to know the number of active threads (the load level), you will have to calculate it yourself. See example 2 below for what to put in the "sampleOccurred" method in the result collector or in the "add" method in the visualizer to do that.
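To make the collector side more concrete, here is a minimal sketch of what such a result collector could look like. The class name, the property key and the fields are my own illustration, not part of JMeter's API, and the actual aggregation and database code is elided:

import org.apache.jmeter.reporters.ResultCollector;
import org.apache.jmeter.samplers.SampleEvent;
import org.apache.jmeter.samplers.SampleResult;

public class DBResultCollector extends ResultCollector {

    private long loadLevel = 0; // peak number of active threads, see example 2

    @Override
    public void sampleOccurred(SampleEvent event) {
        // keep the standard logic (check the sample, pass it to the visualizer)
        super.sampleOccurred(event);
        SampleResult res = event.getResult();
        // properties are only populated after construction, so read them
        // here (or in testStarted), never in the constructor
        String description = getPropertyAsString("DBResultCollector.description");
        loadLevel = Math.max(res.getAllThreads(), loadLevel);
        // ... aggregate res into your statistics here ...
    }

    @Override
    public void testEnded(String host) {
        super.testEnded(host);
        // ... flush the aggregated statistics to the database here ...
    }
}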

The promised examples.
example 1:
@Override
public TestElement createTestElement() {
    // reuse the existing collector if it is already the right type,
    // otherwise create a fresh one and remember it
    if (collector == null || !(collector instanceof DBResultCollector)) {
        collector = new DBResultCollector();
    }
    // push the current GUI settings into the element, so JMeter knows
    // the GUI and this TestElement belong together
    modifyTestElement(collector);
    return collector;
}
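Example 1 only works if "modifyTestElement" and "configure" play along. Here is a hedged sketch of what those overrides might look like in the visualizer; descriptionField is an imaginary Swing text field, and the property key is likewise illustrative:

@Override
public void modifyTestElement(TestElement element) {
    super.modifyTestElement(element);
    // store GUI settings as properties of the result collector,
    // so they are available in non-GUI mode too
    element.setProperty("DBResultCollector.description", descriptionField.getText());
}

@Override
public void configure(TestElement element) {
    super.configure(element);
    // restore the GUI fields from the element loaded from the jmx file
    descriptionField.setText(element.getPropertyAsString("DBResultCollector.description"));
}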

example 2:
// in "sampleOccurred" (result collector) or "add" (visualizer),
// where res is the current SampleResult
loadLevel = Math.max(res.getAllThreads(), loadLevel);
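And since the whole point is saving to a database, here is a rough sketch of how the collector could flush its aggregated numbers at the end of the test. The table, the columns, the property key and the fields (sampleCount, totalElapsed, loadLevel) are all made up for illustration; plug in your own schema and error handling:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

// called from testEnded; sampleCount, totalElapsed and loadLevel are
// assumed to have been aggregated in sampleOccurred
private void saveToDatabase() throws SQLException {
    String jdbcUrl = getPropertyAsString("DBResultCollector.jdbcUrl");
    String sql = "INSERT INTO test_results (label, samples, avg_ms, max_load)"
            + " VALUES (?, ?, ?, ?)";
    try (Connection conn = DriverManager.getConnection(jdbcUrl);
         PreparedStatement ps = conn.prepareStatement(sql)) {
        ps.setString(1, getName());                              // this listener's label
        ps.setLong(2, sampleCount);                              // number of samples
        ps.setLong(3, totalElapsed / Math.max(sampleCount, 1));  // average response time
        ps.setLong(4, loadLevel);                                // peak active threads
        ps.executeUpdate();
    }
}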

Thursday, 6 March 2014

No old tests in regression testing?

Just the other day I was at Assurity, giving my favorite talk about efficient testing (based on the "Rapid Software Testing" course and my personal experience). One of the arguments I made (after Michael Bolton in his time) was "use new tests rather than old ones". Then someone in the audience asked: but how do you do regression testing without using old tests? It stumped me for a second, because that was an awesome question, a new way to look at it. I mean, no one had ever asked that in my presence. When you get a question like that, it means the audience listened carefully, processed and understood what you were saying. It also means you get a chance to look at something from a new point of view.

So, I thought for a second, and then I realized: yep, I'm totally doing regression testing without using old tests. And here is how I do it: I don't use scripted test cases at all. What I use is a list of test ideas that structures my testing and serves as a backbone, as a starting point. So each time I do regression testing, I do it differently, because I don't have steps to follow. Each time I have to think: how would I test this thing, knowing what I know about it right now? So it's always a new test, and it's never out of date.

Another awesome thing was that after my answer the people who had asked realized that they do practically the same thing! They do have scripted test cases, but they never stick to them, and often they don't even read them. So they are actually doing new testing each time. "So why do you need the steps written down?" was my next question. I hope it will do some good. ;-)

I absolutely love it when I can actually say something useful to another tester and see immediately that it was useful. Far too often I get this feeling that a person listens to be polite and then goes about their old business, and maybe even uses some of my insights - but I never get to know about it.

Art of balancing

I love how in the process of making and releasing software you (as a team) constantly have to balance technical and business risks. You can't have it all; you will most likely have to go to production with a list of known issues, because if you don't, you might lose a client or a market. The world moves fast and doesn't wait for anyone. I find that some testers and developers don't really get it, and they insist on blocking the release until all major (in their opinion) issues are resolved.

I say: don't block a release, but make sure the product owner knows about all the technical risks you are aware of, their likelihood and severity, and the risk mitigation plan (which btw might include workaround instructions in the install/user guide). Let the product owner decide whether the risk is worth taking. Getting rid of the issue before release might cost more than dealing with it would, even if it does appear in production.

As a technical specialist I might strive for perfection, but as a problem-solver I always come back to practical solutions.