Showing posts with label ci.

Saturday, September 5, 2020

Questions at Taquelah - Does your functional automation really add value?

I spoke at Taquelah Lightning Talks on one of my favorite topics - 

Does your functional automation really add value?

You can find the slides here - https://www.slideshare.net/abagmar/does-your-functional-automation-really-add-value

Some references:

https://essenceoftesting.blogspot.com/2020/07/does-your-functional-automation-really.html

https://essenceoftesting.blogspot.com/2020/03/tracking-functional-coverage.html

Tuesday, July 7, 2020

Does your functional automation really add value?


We all know that automation is one of the key enablers for those on the CI/CD journey.

Most teams are:

  • implementing automation
  • talking about its benefits
  • up-skilling themselves
  • talking about tooling
  • etc.

However, many times I feel we are blinded by the theoretical value test automation provides, or because everyone says it adds value, or by the shiny tools / tech-stacks we get to use, or ...

To try and understand more about this, can you answer the questions below?

In your experience, or in your current project:
  1. Does your functional automation really add value?
  2. What makes you say it does / or does not?
  3. How long does it take for tests to run and generate reports?
  4. In most cases, the product-under-test is available on multiple platforms – ex: Android & iOS Native, and on Web. In such cases, for the same scenario that needs to be automated, is the test implemented once for all platforms, or once per platform? (See the sketch after this list.)
  5. How easy is it to debug and get to the root cause of failures?
  6. How long does it take to update an existing test?
  7. How long does it take to add a new test?
  8. Do your tests run automatically via CI on a new build, or do you need to “trigger” the same?
  9. What is the test passing percentage?
  10. Do you “rerun” the failing tests to see if this was an intermittent issue?
  11. Can you control the level of parallel execution, and switch to sequential execution based on context?
  12. How clean & DRY is the code?

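To make question 4 concrete, here is a minimal sketch in Java of how the same scenario can be implemented once and still run on Android, iOS and Web. Every name in it (Platform, LoginPage, TEST_PLATFORM, etc.) is illustrative, not from any particular framework:

    import java.util.Map;
    import java.util.function.Supplier;

    enum Platform { ANDROID, IOS, WEB }

    // Tests depend on this interface, never on a concrete driver.
    interface LoginPage {
        boolean loginAs(String user, String password);
    }

    // In a real framework each implementation would use Appium (Android / iOS)
    // or Selenium (Web) locators; they are stubbed here to keep the sketch small.
    final class AndroidLoginPage implements LoginPage {
        public boolean loginAs(String user, String password) { return true; }
    }

    final class IosLoginPage implements LoginPage {
        public boolean loginAs(String user, String password) { return true; }
    }

    final class WebLoginPage implements LoginPage {
        public boolean loginAs(String user, String password) { return true; }
    }

    final class Pages {
        private static final Map<Platform, Supplier<LoginPage>> LOGIN = Map.of(
                Platform.ANDROID, AndroidLoginPage::new,
                Platform.IOS, IosLoginPage::new,
                Platform.WEB, WebLoginPage::new);

        // The platform comes from the environment, so the same compiled test
        // runs against Android, iOS and Web without duplicating the scenario.
        static LoginPage loginPage() {
            Platform platform = Platform.valueOf(
                    System.getenv().getOrDefault("TEST_PLATFORM", "ANDROID"));
            return LOGIN.get(platform).get();
        }
    }

    public class LoginScenario {
        public static void main(String[] args) {
            LoginPage page = Pages.loginPage();
            if (!page.loginAs("demo-user", "demo-password")) {
                throw new AssertionError("login scenario failed");
            }
        }
    }

If the answer to question 4 is "once per platform", every new scenario costs two or three implementations - and that cost compounds with questions 6 and 7.
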
In my experience, unfortunately most of the functional automation that is built:
  • is not optimal
  • is not fit-for-purpose
  • does not run fast enough
  • gives inconsistent feedback, and is hence unreliable

Hence, for the amount of effort invested in implementing automation,
  1. Are you really getting the value from this activity?
  2. How can automation truly provide value for teams?


Thursday, February 14, 2019

Talks and workshops in Agile India 2019


In the upcoming Agile India 2019 in Bangalore, I will be speaking about:

If you have not yet registered, you can use this code to get a discount on your registration - anand-10di$c-agile 

In addition, there are some great pre- and post-conference workshops as well. I will be participating in the "Facilitating for Effective Collaboration...One Nudge at a Time" workshop - conducted by Deborah Hartmann Preuss and Ellen Grove.


This is going to be one amazing conference to learn, network and share ideas and experiences. See you there!



Monday, February 11, 2019

Test Automation in the World of AI and ML

My article on "Test Automation in the World of AI & ML" recently got published on InfoQ.


Here are the key takeaways mentioned in the article -

  • There are many criteria to be considered before building a framework / selecting tools for Functional Test Automation
  • It is very important to prioritise the framework / tool capabilities needed for the software-under-test
  • A good, scalable Test Automation Framework that provides fast and reliable feedback to the team enables collaboration and CI/CD
  • Debugging / RCA (root cause analysis) and support for the libraries / tools used is an afterthought in most cases. Do not fall into that trap.
  • There are some promising commercial tools that fit seamlessly in the Agile way of working. Depending on the complete context, these tools may be a good choice over building your own framework for Functional Automation.

You can read the full article here

Looking forward to comments on the same!



Tuesday, May 2, 2017

Criteria for setting up a Mobile Test Automation LAB

I recently got asked this question related to the MAD LAB (Mobile Automation Devices LAB) - "Would like to understand how can we setup something similar in our organisation?"

Since this question is applicable to anyone thinking of setting up, or who has already set up, their own lab, I thought I would share my answer here.

To set up your own LAB for Mobile Test Automation, multiple things need to align:


Supportive management who -
  • allows experiments (within reason, of course) and encourages learning through failure,
  • is willing to invest in infrastructure ($$)

Skilled and Passionate team members who -
  • understand the domain well,
  • are willing to learn, experiment, re-learn and fail fast,
  • keep looking for innovative solutions to solve the problems on hand,
  • do not reinvent the wheel.

Philosophy aside, our MAD LAB has the following: 
  • Mac Minis (8-12 devices per Mac Mini), 
  • Powered USB Hubs (I use the ones shown below - and they are working pretty well)

  • High-quality USB cables (I use the ones shown below - and they are working pretty well)
  • CI (Jenkins) set up correctly to keep running tests continuously, with proper reporting in place (else what's the use of running tests if you do not look at the results)

You could start with a similar setup IF it fits your product-under-test context.

After I answered this on LinkedIn, I realised there are more parameters to think about than just the above.
  • Knowing which devices to use in your Lab
  • Having good, reliable Internet connection
  • Devices should be "seen" easily
  • Should be easy to work on / with the devices as and when required
  • Know how the devices will be placed in the lab. We tried the following:
    • 2-way tape - that didn't work. Devices used to stay up for a few days, then "drop" suddenly. Of course, that also depends on the back surface of the devices.
    • We tried many mobile stands / hangers (shown below) - but each had its own limitations
    • Finally I found an industrial-strength velcro (1" velcro tape that could take a couple of pounds of weight) - and my devices have not budged since. PS: Please be careful when putting this velcro on the devices. IF it gets on your hand, you will have a velcro tattoo for a long, long time.

What other parameters would you consider for setting up your own Lab? Looking forward to the comments below.


Tuesday, February 14, 2017

Features of my Android Test Automation Framework

[UPDATED - Added link to implementation details at the end of the post]

As I have shared in my previous few blog posts (A new beginning - entertainment on mobile, How to enable seamless running of appium tests on developer machines?), a few months ago, I embarked on a new journey as "Director - Quality" for the Viu product at Vuclip.


Here are a few details about our Viu app:
Viu offers high-quality, popular, regional video content in various languages for consumers in different regions - Indonesia, Malaysia, India, the Middle East, Egypt, ...
Consumers on the move could be using Android or iOS devices to watch their favourite content - either streaming it directly via their mobile data plan, or watching a video they downloaded earlier at home / office.
The very interesting, and challenging, bit from a testing perspective is that the app behaves differently based on which part of the world the user is in. This logic is not based on Geolocation.

Objective

My objective from functional test automation is:
  • Get quick feedback from every build and know if the app is working well. If NOT, be able to get to the root cause ASAP.
  • Provide feedback and confidence to stakeholders (Product team, Business team, etc.) on the "true" state of the product. There should be no surprises, for anyone!
  • Run tests seamlessly by simulating different regions

Approach

To achieve this, I chose to go with the following tech-stack:
  • Test Intent specification - cucumber-jvm - to specify the business intent, in business terminology (expected to be) understood by all (a step-definition sketch follows this list).
  • Reports - cucumber-reports provide good, rich, meaningful and easily understandable reports which provide meaningful information about the state of the product.
  • Device interaction - Appium / Java - implement the intent specified by cucumber-jvm. This will also allow us the flexibility to test against Android, iOS and Web.
  • Continuous Integration (CI) server - Jenkins - to be able to run tests automatically when a new build is ready. Also, we integrated the cucumber-reports via the cucumber-reports plugin directly in Jenkins - so now all stakeholders interested in the reports just need to go to one place and get all the information, in real time. No more need for Test Reports to be generated manually and sent to everyone.
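
To illustrate how the intent layer stays separate from the implementation layer, here is a minimal cucumber-jvm step-definition sketch, assuming the io.cucumber.java.en annotations. The feature wording and the VideoPage class are hypothetical, not taken from the actual Viu suite:

    // The Gherkin these steps would bind to:
    //   Scenario: Returning user resumes a downloaded video
    //     Given the user has downloaded a video
    //     When the user plays the downloaded video
    //     Then playback starts successfully

    import io.cucumber.java.en.Given;
    import io.cucumber.java.en.Then;
    import io.cucumber.java.en.When;

    public class PlaybackSteps {
        // The page object implements the intent via Appium; the steps stay
        // business-readable and platform-agnostic.
        private final VideoPage videoPage = new VideoPage();

        @Given("the user has downloaded a video")
        public void theUserHasDownloadedAVideo() {
            videoPage.downloadSampleVideo();
        }

        @When("the user plays the downloaded video")
        public void theUserPlaysTheDownloadedVideo() {
            videoPage.playDownloadedVideo();
        }

        @Then("playback starts successfully")
        public void playbackStartsSuccessfully() {
            if (!videoPage.isPlaying()) {
                throw new AssertionError("video did not start playing");
            }
        }
    }

    // Stub page object; the real one would drive the app through Appium.
    class VideoPage {
        private boolean playing;

        void downloadSampleVideo() { /* navigate the download UI via Appium */ }
        void playDownloadedVideo() { playing = true; }
        boolean isPlaying() { return playing; }
    }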

Test Execution environment

I looked at various cloud-based service providers (SauceLabs, Test Object, pCloudy, AWS Device Farm, etc.) who could run our tests on real devices in their data centers. These tests would need to be triggered automatically via Jenkins when a new build (apk) is available, or as we add more tests. However, none of these providers could satisfy my requirements without compromising significantly on my objectives.

Hence, I finally decided to set up my own private Test Lab for this. Oh fun!

Framework Architecture

Below is the architecture of the framework that I came up with. This is based on the architecture I posted in my earlier blog post on “Assertion & Validations in Page Object” (https://essenceoftesting.blogspot.com/2012/01/assertions-and-validations-in-page.html)
[Image: ViuTestFrameworkArchitecture.png - the framework architecture diagram]
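
As a rough illustration of the "assertions and validations in page objects" idea from that post, as I understand it (all names here are mine, not from the post): each page object validates on construction that the app actually reached that page, so failures surface at the exact navigation step.

    public class HomePage {

        public HomePage() {
            // Fail immediately if the home screen never loads (in a real
            // page object: wait for a unique home-screen element via the
            // Appium driver).
            verifyPageLoaded();
        }

        private void verifyPageLoaded() {
            // e.g. driver.findElement(...) wrapped in an explicit wait
        }

        // Navigation methods return the next page object, so a test reads
        // as a chain of business actions, each hop validated on arrival.
        public SearchResultsPage searchFor(String term) {
            // type into the search box and submit, via Appium
            return new SearchResultsPage();
        }
    }

    class SearchResultsPage {
        SearchResultsPage() {
            // same pattern: validate this page loaded before proceeding
        }
    }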

Approach, Features & Capabilities of the Test Lab

  • For Jenkins configuration, we configure the job using a Jenkinsfile - to ensure our Jenkins configuration is also under version control. Also, since this is a Groovy script, we can have logic and processing done as part of the job execution.
  • This helped us keep the configuration consistent and common across any type of job we had to run.
  • The Jenkins job will trigger a set of commands on the Jenkins Agent.
  • Jenkins Agents are set up on Mac Mini machines. Each Agent has only 1 executor. This is essential to prevent using a device that is already in use by another job / executor / agent.
  • The Mac Mini (more to be added on demand) has a powered USB hub which connects up to 8 devices.
  • Depending on the types of devices connected, we have as many Jenkins Agents (with 1 executor only) allocated for them.
Example:
    • Samsung devices (with API Level 22) allocated to Jenkins agent - viu-e2e-samsung
    • Motorola devices allocated to Jenkins agent - viu-e2e-moto

Infrastructure

  • To manage and use the devices allocated to each node and, more importantly, to prevent other nodes from using those devices, each node in Jenkins has an environment variable in its configuration - CONNECTED_DEVICE_IDS - a comma-separated list of devices allocated to this node.
    • This approach makes it easy to change / add / remove devices on the fly. All we need to do in such cases is update the device ID in the appropriate node's CONNECTED_DEVICE_IDS environment variable.
    • Our gradle file reads the CONNECTED_DEVICE_IDS environment variable and finds devices connected to the Mac Mini matching the provided device IDs. This simple technique allows specific device allocation from the pool of devices to each node.
    • PS: If anyone makes an error and provides the same device ID for multiple nodes, we all know what will happen. These are areas where we need to be very cautious in our execution and maintenance.
  • The URL for the Android APK file is also passed to the gradle file as an environment variable. We download the APK file once before test execution starts.
  • Functional (end-2-end) Automation is painfully slow to run. To get feedback quickly, we have to run it in parallel. All existing approaches to running cucumber-jvm tests in parallel failed to meet our requirement: I wanted to run each scenario in parallel, on whichever device is free (from the matching devices list). Eventually I ended up writing a small script that allows me to run scenarios in parallel (sketched after this list).
  • Managing the Appium Server (start / stop) is another important activity - required to be done just once per device. Gradle manages that as well.
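
To tie the above together, here is a condensed sketch in plain Java of the device-allocation and parallel-execution idea. The real logic lives in our gradle file; the class name and the scenario list here are illustrative:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.List;
    import java.util.Set;
    import java.util.TreeSet;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.LinkedBlockingQueue;
    import java.util.concurrent.TimeUnit;

    public class ParallelScenarioRunner {

        // Devices this Jenkins node may use: the node's CONNECTED_DEVICE_IDS
        // list, intersected with what adb actually sees on this Mac Mini.
        static List<String> allocatedDevices() throws Exception {
            Set<String> allowed = new TreeSet<>(Arrays.asList(
                    System.getenv().getOrDefault("CONNECTED_DEVICE_IDS", "")
                            .split(",")));
            List<String> usable = new ArrayList<>();
            Process adb = new ProcessBuilder("adb", "devices").start();
            try (BufferedReader out = new BufferedReader(
                    new InputStreamReader(adb.getInputStream()))) {
                String line;
                while ((line = out.readLine()) != null) {
                    String id = line.split("\\s+")[0].trim();
                    if (allowed.contains(id)) {
                        usable.add(id);
                    }
                }
            }
            return usable;
        }

        public static void main(String[] args) throws Exception {
            List<String> devices = allocatedDevices();

            // One pool of free devices: each scenario takes whichever device
            // is idle, runs, and puts it back - scenario-level parallelism.
            BlockingQueue<String> freeDevices = new LinkedBlockingQueue<>(devices);
            ExecutorService pool = Executors.newFixedThreadPool(devices.size());

            List<String> scenarios =
                    List.of("scenario-1", "scenario-2", "scenario-3");
            for (String scenario : scenarios) {
                pool.submit(() -> {
                    String device = freeDevices.take(); // wait for a free device
                    try {
                        // The real runner points cucumber-jvm at one scenario,
                        // against the Appium server already started (once)
                        // for this device.
                        System.out.printf("running %s on %s%n", scenario, device);
                        return null;
                    } finally {
                        freeDevices.add(device); // device is free again
                    }
                });
            }
            pool.shutdown();
            pool.awaitTermination(30, TimeUnit.MINUTES);
        }
    }

Because each Jenkins agent has exactly one executor and its own CONNECTED_DEVICE_IDS list, two jobs can never grab the same device, and parallelism is bounded only by how many devices are free.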

Next steps

  • Stabilize parallel test execution (problems with Android 7)
  • Start with iOS app automation
  • Start with Web automation (www.viu.com)
  • Share the gradle file with the community, if anyone is interested.
  • [UPDATED] You can find the details of the implementation here