QAOps on AWS with Packer, Terraform and Robot Framework


Before we start, a note: this article will not cover specific testing techniques in detail, as there are already many formidable resources on the internet for that.

When some people hear the word ‘QAOps’ for the first time, they might think: another (whatever)Ops term? Seriously?

Yes, but let me explain what QAOps is, why it matters and why it is a quality assurance trend for the coming years. We can all agree that software quality plays a crucial role in virtually every ‘digital’ company in the world, and yet for many it is somehow neglected, overshadowed by the pressure to deliver fast and to reduce costs, e.g. by cutting tests from the project (a common and comically destructive mindset).

There are many companies with literally zero automated tests, not even the unit tests that should be mandatory. More than a good practice, testing is common sense. Any software delivery company should care about guaranteeing at least a minimum level of quality. We know that only production is production, and it is practically impossible to predict or avoid every possible problem, but again, it is common sense: developers come and go, software evolves, infrastructure changes, business changes… so why not try to keep everything working during the entire product lifecycle? (Yes, software doesn’t just freeze after being promoted to production; in fact, it’s quite the opposite.)

Now imagine a more advanced use case: we plan to automate our acceptance test suite, with hundreds of scenarios, because our manual cycles are taking too long. Soon we realize we need to provision and manage a whole infrastructure stack just to support the environment that will run our tests. This takes time, and in most companies infrastructure is managed by an outsourcing partner, which adds extra layers of complexity to the recipe.

The nightmare gets worse when we need to keep that environment as similar as possible to production: security patches and baselines, software upgrades, data center maintenance, and all the other problems that exist simply because that’s the way it is, period. Suddenly the environment is no longer reliable, test execution becomes (again) a bottleneck, and the unavoidable happens: you (or your manager, or your manager’s manager — you get the point) decide to skip the test stage because you need to deliver and you can’t afford to have your team (or someone else’s team) busy figuring out why things are no longer working as expected. There is another villain in this story too: if you run your workloads with a cloud provider, you will eventually notice that this environment costs money, and someone (…) may decide to cut this extra charge because it doesn’t make sense to pay for something that doesn’t seem to bring any good.


We have briefly talked about why tests are so important; now it’s time to talk about what QAOps is. But first, a reminder of our main goal here:

To deliver software quickly and with quality, so we can respond to business changes, improving time to market and the ROI of each delivery cycle.

QAOps is an enabler for that: it is about integrating your QA stages into your delivery pipeline, enabling continuous testing and quality feedback without silos between test engineers, configuration managers, infrastructure teams, developers and project managers. In fact, it is more about culture than technology: you need to establish rules and define processes that will protect your software and hold everything together, so everyone is able to observe, learn and contribute.

One of those processes is test automation, from unit tests to acceptance tests, or more advanced cases like vulnerability scans and performance tests (the testing pyramid gives us a good overview of where we should spend time and money).

Unfortunately, tests alone don’t create a great impact on software delivery, but when they are inserted into the delivery process/pipeline we can extract huge value from this investment. (Be aware that automation is not an easy task and it is certainly not free: even with OSS tools you still pay for your employees or for a consulting company. But don’t forget the word investment.)

By bringing together infrastructure as code, automated test execution, quality gates that enforce the desired level of software quality, collaboration tools for quick feedback and a good CI/CD tool to schedule and control the flow, we can achieve our goals. Infrastructure becomes immutable, automated and completely monitored, so you (or anyone) can see what’s going on under the hood. No more “someone broke the QA environment”: just destroy it and recreate everything as you wish. No more “I was in a hurry and I missed that specific test scenario”: tests are automated and strict coverage rules ensure it. No more surprises like “a CVE was found in a library that hasn’t been updated since jQuery 1.0.0…”: quality gates are there not to make things harder, but to continuously improve our software. No more “we didn’t notice that the QA environment wasn’t running for 2 weeks”: feedback ensures that problems, test failures and changes are shared with every stakeholder. We will fail, and we must fail fast.
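As a minimal sketch of the “destroy it and recreate everything” idea, a disposable QA environment can be declared in Terraform. All names, the AMI ID and the sizing below are illustrative assumptions, not a reference architecture:

```hcl
# Disposable QA environment: created for a test run, destroyed afterwards.
# The AMI ID, instance type and names are illustrative only.
variable "env_name" {
  description = "Unique name for this ephemeral QA environment"
  type        = string
  default     = "qa-run-001"
}

provider "aws" {
  region = "us-east-1"
}

resource "aws_instance" "qa_runner" {
  ami           = "ami-0123456789abcdef0" # image with the test stack baked in (e.g. via Packer)
  instance_type = "t3.medium"

  tags = {
    Name    = var.env_name
    Purpose = "ephemeral-qa"
  }
}
```

The pipeline runs `terraform apply` before the test stage and `terraform destroy` when it finishes, so a broken QA environment is never repaired by hand, only recreated.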

From mobile tests, to SPA (Single Page Applications) or even desktop tests, QAOps will bring value to your software by enabling you to:

  • Run parallel tests by creating as many environments as you need
  • Save money by not having testing environments running unnecessarily
  • Schedule nightly long-running tests
  • Test critical paths and common bugs
  • Virtualize your environments

Of course, there are some requirements, and you probably won’t like what you’re about to read:

Your infrastructure needs to be agile: you can’t have IaC (Infrastructure as Code) without it! If it takes two weeks to provision a new ESX host, 8 hours to install a software package, or 3 days to create a virtual server on an F5 load balancer, you will suffer, and these barriers will stop you from focusing on what is important. A microservices architecture, cloud providers like AWS, managed storage services (DynamoDB, Aurora, S3) and infrastructure abstractions (ECS, EKS, Fargate) will enable you to be agile and declarative (and consequently under source control). If you don’t have any of these elements, getting closer to the infrastructure team may help you gain speed and confidence and, most importantly, earn their trust in your tools.
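Immutable images are part of that declarative picture, and this is where Packer comes in. A minimal HCL2 sketch of an image with the test tooling baked in (the source AMI, region and package list are assumptions):

```hcl
# Packer template (HCL2) for an immutable QA runner image.
source "amazon-ebs" "qa_runner" {
  region        = "us-east-1"
  instance_type = "t3.micro"
  source_ami    = "ami-0123456789abcdef0" # base image, illustrative
  ssh_username  = "ubuntu"
  ami_name      = "qa-runner-{{timestamp}}"
}

build {
  sources = ["source.amazon-ebs.qa_runner"]

  provisioner "shell" {
    inline = [
      "sudo apt-get update",
      "sudo apt-get install -y python3-pip",
      "pip3 install robotframework", # bake the test framework into the image
    ]
  }
}
```

Terraform then only has to reference the resulting AMI; no configuration drift, no manual patching of long-lived test machines.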

Technology brick

Having a consistent set of technologies, chosen and approved by an architecture team, is a valuable move to enable automation and avoid wasting time on dead ends. When it comes to testing tools, the story is the same: avoid vendor lock-in, keep a sensible layer of abstraction so everyone can take advantage of your software and you can evolve without much coupling, be agnostic where possible, and don’t reinvent the wheel.

There are many battle-tested tools that follow good open standards, with strong community support and official support (if that is a requirement).

Some of these tools and their roles are described in the picture below:

Continuously testing with IaC

Here are more examples of what we can achieve with some simple but powerful tools:

SPA, Mobile and Desktop test automation

As the test framework, Robot Framework is a great and flexible tool that helps us keep our test suites clean, reuse code blocks and focus on our business.

Below is an example of a simple Windows Calculator test written in Robot Framework:
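The sketch below assumes AutoItLibrary (a Robot Framework library for Windows GUI automation) is installed; the window title and key sequence are illustrative and may differ between Windows versions:

```robotframework
*** Settings ***
Documentation     Simple Windows Calculator smoke test (illustrative sketch).
Library           AutoItLibrary    # assumed Windows GUI automation library

*** Test Cases ***
Calculator Should Open And Accept Input
    [Documentation]    Opens calc.exe, types 2 + 3 = and closes the window.
    Run    calc.exe
    Win Wait Active    Calculator
    Send    2{+}3=
    Win Close    Calculator
```

The same suite structure works for SPA tests (with SeleniumLibrary) or mobile tests (with AppiumLibrary), which is what makes Robot Framework a good abstraction layer across test types.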

Honorable mention — DataOps

Please don’t panic: buzzwords or not, some terms help us understand what we need to do to achieve specific goals, and DataOps is one of them.

Good DataOps practices will be necessary to feed our environments (through automation) with good, secure and reliable data from production, so everyone, from data scientists to testers, can take advantage of it.

Cloud Specialists providing professional services with DevOps, BigData, Cloud Native Applications and Security.