
Saturday, January 15, 2022

Making Android Studio Emulator work on Mac with M1 chip

Like many who have been using an Apple MacBook (Pro) with the new M1 chip, I am very impressed and happy with the laptop. However, in the initial days there were challenges with some software not working out of the box on this new hardware.

While most of the issues got resolved, I ran into some very weird issues with one specific piece of software that was essential for my work - running Android emulators off Android Studio.

I saw various errors when trying to launch the emulator (error screenshots not reproduced here).

Eventually, there was an Android Studio update specifically for the Mac M1 chip - the latest one from the website (as of 15 Jan 2022) is Android Studio Arctic Fox (2020.3.1) Patch 4 (android-studio-2020.3.1.26-mac_arm.dmg).

For some time, I was able to create the emulators and use them. However, something weird happened along the way, and this also stopped working - and I started seeing the same errors as before.

After countless failed installs and uninstalls of various versions of Android Studio and of the Android SDK Manager components (including different emulator versions), I took the latest Android Studio Chipmunk (2021.2.1) Canary 7 release (android-studio-2021.2.1.7-mac_arm.zip) from the Android Studio download archives, and that worked out of the box (with my previously downloaded Android SDKs and Emulator).

Note: This is based on my experience on macOS Big Sur v11.6.2

Tuesday, July 7, 2020

Does your functional automation really add value?


We all know that automation is one of the key enablers for those on the CI/CD journey.

Most teams are:

  • implementing automation
  • talking about its benefits
  • up-skilling themselves
  • talking about tooling
  • etc.

However, many times I feel we are blinded by the theoretical value test automation provides, or because everyone says it adds value, or because of the shiny tools / tech-stacks we get to use, or ...

To try and understand this better, can you answer the questions below?

In your experience, or in your current project:
  1. Does your functional automation really add value?
  2. What makes you say it does / or does not?
  3. How long does it take for tests to run and generate reports?
  4. In most cases, the product-under-test is available on multiple platforms – ex: Android & iOS Native, and on Web. In such cases, for the same scenario that needs to be automated, is the test implemented once for all platforms, or once per platform?
  5. How easy is it to debug and get to the root cause of failures?
  6. How long does it take to update an existing test?
  7. How long does it take to add a new test?
  8. Do your tests run automatically via CI on a new build, or do you need to “trigger” the same?
  9. What is the test passing percentage?
  10. Do you “rerun” the failing tests to see if this was an intermittent issue?
  11. Is there control over the level of parallel execution, with the ability to switch to sequential execution based on context?
  12. How clean & DRY is the code?

In my experience, unfortunately, most of the functional automation that is built:
  • is not optimal
  • is not fit-for-purpose
  • does not run fast enough
  • gives inconsistent feedback, and is hence unreliable

Hence, for the amount of effort invested in implementing automation,
  1. Are you really getting the value from this activity?
  2. How can automation truly provide value for teams?


Friday, October 11, 2019

Overcoming chromedriver version compatibility issues the right way

I encountered an interesting challenge recently when doing Native Android / iOS app automation - this was related to Chrome browser versions getting updated automatically and my tests failing because of errors like:


org.openqa.selenium.SessionNotCreatedException: session not created: This version of ChromeDriver only supports Chrome version 74
23:04:25 (Driver info: chromedriver=74.0.3729.6 (255758eccf3d244491b8a1317aa76e1ce10d57e9-refs/branch-heads/3729@{#29}),platform=Windows NT 6.3.9600 x86_64) (WARNING: The server did not provide any stacktrace information)


So I asked a question on LinkedIn, and also tweeted, asking how to manage the ChromeDriver version when running WebDriver / Appium tests.

The answer was common and obvious: use WebDriverManager. This is a beautiful, simple, and indeed the right solution to the problem.
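
For anyone new to it, here is a minimal usage sketch (my illustration, assuming the io.github.bonigarcia WebDriverManager 5.x Java API):

import io.github.bonigarcia.wdm.WebDriverManager;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class WdmExample {
    public static void main(String[] args) {
        // Resolves a chromedriver matching the locally installed Chrome,
        // downloads it if needed, and sets the webdriver.chrome.driver property.
        WebDriverManager.chromedriver().setup();

        WebDriver driver = new ChromeDriver();
        driver.get("https://essenceoftesting.com");
        driver.quit();
    }
}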

However, that was a partial answer for me. 

Here is my context and problem statement in detail:

  • My Test Automation Framework is based on Java / Appium and I use AppiumTestDistribution (ATD) 
  • ATD is open-source; it takes away my pain and effort of managing Appium and the devices, and also takes care of running the tests in parallel or distributed mode, on Android as well as iOS
  • In my local lab setup, I have many different android devices connected - which run tests as directed by ATD
  • Since you cannot control how the Google Play Store / Apple App Store push out new versions of apps for different Android / iOS versions on devices, it is easily possible to end up with different versions of the Chrome browser in your device lab. When this happens, the tests start failing because of chromedriver incompatibility issues.

Once the community very kindly reminded me about WebDriverManager (which I had forgotten about), I knew what had to be done.

I looked at the ATD code and realised that it was using the default chromedriver version set up when I had installed Appium. This chromedriver was being used when instantiating a new instance of the AndroidDriver.

So I submitted a PR for ATD - which essentially did the following:
  • Query the chrome browser versions on each connected device
  • For the **highest version of the browser, use WebDriverManager and get the appropriate chromedriver downloaded
  • Pass the path to the correct chromedriver when creating an instance of the AndroidDriver
**highest version - what does that mean? Well, I was also confused initially. But the answer was simple. On some devices, the Chrome browser is installed by default as a system app, which cannot be removed. So as new versions of the browser get installed, the default Chrome system app is always there, and when you query for the versions of Chrome on the device, you will see two such versions. My code logic was to get all these versions and pick the highest one.
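
A simplified sketch of that logic (my illustration with hypothetical helper names, not the actual ATD code; assumes adb is on the PATH and the WebDriverManager 5.x Java API):

import io.github.bonigarcia.wdm.WebDriverManager;
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.util.ArrayList;
import java.util.List;

public class ChromeDriverResolver {

    // Query all installed Chrome versions on the device (system app + updates).
    static List<String> chromeVersionsOnDevice(String udid) throws Exception {
        Process p = new ProcessBuilder("adb", "-s", udid, "shell",
                "dumpsys", "package", "com.android.chrome").start();
        List<String> versions = new ArrayList<>();
        try (BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()))) {
            String line;
            while ((line = r.readLine()) != null) {
                line = line.trim();
                if (line.startsWith("versionName=")) {
                    versions.add(line.substring("versionName=".length()));
                }
            }
        }
        return versions;
    }

    // Pick the highest version and let WebDriverManager fetch the matching chromedriver.
    static String chromedriverFor(String udid) throws Exception {
        String highest = chromeVersionsOnDevice(udid).stream()
                .max(ChromeDriverResolver::compareVersions)
                .orElseThrow(() -> new IllegalStateException("Chrome not found on " + udid));
        // The major version is enough for WebDriverManager to resolve the right driver.
        String major = highest.split("\\.")[0];
        WebDriverManager wdm = WebDriverManager.chromedriver().browserVersion(major);
        wdm.setup();
        return wdm.getDownloadedDriverPath();
    }

    // Numeric comparison of dotted version strings, e.g. "74.0.3729.6".
    static int compareVersions(String a, String b) {
        String[] x = a.split("\\."), y = b.split("\\.");
        for (int i = 0; i < Math.max(x.length, y.length); i++) {
            int xi = i < x.length ? Integer.parseInt(x[i]) : 0;
            int yi = i < y.length ? Integer.parseInt(y[i]) : 0;
            if (xi != yi) return Integer.compare(xi, yi);
        }
        return 0;
    }
}

The path returned by such a helper can then be passed via the chromedriverExecutable capability when creating the AndroidDriver instance.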

That, in essence, is how I solved the problem; the full change is in the PR.
Special thanks to Sai Krishna for quickly approving and merging this PR.

Hope this provides more information about my problem statement, and how I used the community's suggestion of WebDriverManager to solve the problem.


Thursday, February 14, 2019

Talks and workshops in Agile India 2019


In the upcoming Agile India 2019 in Bangalore, I will be speaking about:

[session details not reproduced here]

If you have not yet registered, you can use this code to get a discount on your registration - anand-10di$c-agile 

In addition, there are some great pre and post conference workshops as well. I will be participating in the "Facilitating for Effective Collaboration...One Nudge at a Time" workshop, conducted by Deborah Hartmann Preuss and Ellen Grove.


This is going to be one amazing conference to learn, network and share ideas and experiences. See you there!



Friday, October 26, 2018

Agile Testing, Analytics Testing and Measuring Consumer Quality from Poland and USA

The last few weeks have been very hectic for me. In between my consulting assignments, I traveled to Krakow, Poland for Agile & Automation Days 2018, and then to Arlington, Virginia in USA for STPCon Fall 2018.

In the Agile & Automation Days 2018 conference, I spoke about "Measuring Consumer Quality - The Missing Feedback Loop" and conducted a 1/2 day workshop on "Analytics Rebooted - A Workshop".

In STPCon Fall 2018, I conducted 2 workshops - 1/2 day each - "Practical Agile Testing Workshop" and "Analytics Rebooted - A Workshop", and also spoke about "Measuring Consumer Quality - The Missing Feedback Loop".

Overall, I had a very good trip, with amazing conversations and interactions with the attendees and the speakers. I would be lying if I said I am not tired and that my throat is not sore. But would I do this again? Absolutely! Going to conferences and meeting people, sharing my experiences with them, and learning from their experiences gives me a lot of happiness and satisfaction.

Below are the abstracts of the workshops and the talk. 

Contact me via LinkedIn, Twitter, or my site - essenceoftesting.com - if you need any additional information, or if you want help in learning / implementing these or other topics related to Quality / Testing / Automation.



Practical Agile Testing Workshop

Workshop Description:

The Agile Manifesto was published in 2001. It took the software industry a good few years to truly understand what the manifesto means, and the principles behind it. However, choosing and implementing the right set of practices to get the true value from working the Agile way has been the biggest challenge for most!

While Agile is now mainstream, and we are getting better at the development practices of “being Agile”, Testing still lags behind in most cases. A lot of teams are still working in a staggered fashion (a.k.a. the iterative-waterfall way of working) - teams may be testing after development completes, or Automation is done in the next Iteration / Sprint, etc.

In this workshop, we will learn and share various principles and practices which teams should adopt to be successful in testing (in-cycle) in Agile projects.

Workshop Agenda:
  • What is Agile testing? - Learn what it means to Test on Agile Projects
  • Effective strategies for Distributed Testing - Learn practices that help bridge the Distributed Testing gap!
  • Test Automation in Agile Projects - Why? What? How? - Why is Test Automation important, and how do we implement a good, robust, scalable and maintainable Test Automation framework!
  • Build the "right" regression suite using Behavior Driven Testing (BDT) - Behavior Driven Testing (BDT) is an evolved way of thinking about Testing. It helps in identifying the 'correct' scenarios, in the form of user journeys, to build a good and effective (manual & automation) regression suite that validates the Business Goals.
Key learning for participants in this workshop:
  • Understand the Agile Testing Manifesto.
  • Learn the Testing practices and activities essential for teams to adopt in an Agile way of working.
  • Discover techniques to do effective testing in distributed teams.
  • Find out how Automation plays a crucial role in Agile projects.
  • Learn how to build a good, robust, scalable and maintainable Functional Automation framework.
  • Learn, by practice, how to identify the right types of tests to automate as UI functional tests - to get quick and effective feedback.




Analytics Rebooted – A Workshop

Workshop Description:

I have come across some extreme examples of Businesses / Organizations who have all their eggs in one basket, relying purely on Analytics to:
  • understand their Consumers (engagement / usage / patterns / etc.),
  • understand usage of product features, and,
  • do all revenue-related book-keeping.

Hence, the saying “Business runs on Analytics, and it may be OK for some product / user features to not work correctly, but Analytics should always work” - is not a myth!

What this means is that Analytics is more important now than ever before.

In this workshop, we will not assume anything. We will discuss and learn by example and practice, the following:
  • How does Analytics work (for Web & Mobile)?
  • Test Analytics manually in different ways
  • Test Analytics via the final reports
  • Why some Automation strategies will work, and some WILL NOT WORK (based on my experience)!
  • See a demo of the Automation running for the same
  • Time permitting, we will set up and run some Automation scripts on your machine to validate the same



Measuring Consumer Quality – The Missing Feedback Loop

Session Description:

How to build a good quality product is not a new topic. Proper usage of methodologies, processes, practices, collaboration techniques can yield amazing results for the team, the organization, and for the end-users of your product.

While there is a lot of emphasis on the processes and practices side, one aspect that is still spoken about “loosely” is the feedback loop from your end-users that drives better decisions.

So, what is this feedback loop? Is it a myth? How do you measure it? Is there a “magic” formula to understand the data received? How do you add value to your product using this data?

In this interactive session, we will use a case study of a B2C entertainment-domain product (having millions of consumers) as an example to understand, and answer, the following:
  • The importance of knowing your Consumers
  • How do you know your product is working well?
  • How do you know your Consumers are engaged with your product?
  • Can you draw inferences and patterns from the data to reach a point of being able to make predictions on Consumer behavior, before making any code change?

Attendees will gain a deeper understanding and appreciation of the following:
  • What is Consumer Quality and how does it help shape your business!
  • Ways to measure Consumer Quality
  • Why is understanding Consumer Engagement vital to the success of your product


Tuesday, October 10, 2017

Analytics - the forgotten child!

After a long time, I spoke about What, Why and How of Analytics Testing at Selenium Conference, Berlin 2017.

This talk was initially supposed to be focussed on Web Analytics only, with impact on / of IoT (Internet of Things) and Big Data, but my recent experiences made me realise that the learnings could easily be applied to Analytics from Mobile native apps as well.

So against better judgement, a full 30 minutes before I was supposed to go on stage, I started a revamp of the slides to include more content, which also meant a complete change of flow of the talk / slides. Talk about making stupid decisions, but thankfully, it turned out pretty ok!!

Abstract of the talk:

What is Web Analytics and why is it important? We'll walk through techniques for manually testing your data and automating the validation process.
Just knowing about Analytics is not sufficient for business now. There are new kids in town - IoT and Big Data - two of the most used and well-known buzz words in the software industry! With a creative mindset looking for opportunities to add value, the possibilities for IoT are infinite. With each such opportunity, there's a huge volume of data being generated which, if analysed and used correctly, can feed into creating more opportunities and increased value propositions.
There are 2 types of analysis that one needs to think about:
  1. How is the end-user interacting with the product? - This will give some level of understanding into how to re-position and focus on the true value add features for the product.
  2. What are the patterns in the data? - With the huge volume of data being generated by the end-user interactions, and the data being captured by all devices in the food-chain of the offering, it is important to identify patterns and find out new product and value opportunities based on these.

Video from the talk:



Slides from the talk:



Tuesday, August 22, 2017

NullPointerException from RemoteWebElement in Selenium via Appium Java-Client 5.0.0-BETA9

As you may be aware from my previous posts about MAD-LAB, we are using Appium, with Java-Client 5.0.0-BETA9 to automate user journeys of the VIU app on Android & iOS devices.

Last week, while I was in the middle of another round of significant changes to support more capabilities in the test framework for the Android app, the tests suddenly started failing. All infrastructure pieces were working fine, but when the App launched, I started getting this error:

ERROR AndroidLanguageScreen:16 - [5203bb1ae2771425] - ERROR in clicking on androidElement - 'By.id: tv_one' - exception - 'null'
java.lang.NullPointerException

The code in question was - driver.findElement(myElementLocator).click()

On further investigation, it seemed that there was a problem in doing any interaction with the app, not just "click".

After a lot of racking my head, I asked a colleague to see if the problem reproduced on her machine. She had not run the tests for a few days, and as soon as she ran the test execution command, the same error happened on her machine as well. Interestingly though, we observed the following trace in her machine's console logs:

------------
Packages that were updated:


Download https://repo1.maven.org/maven2/org/seleniumhq/selenium/selenium-support/3.5.1/selenium-support-3.5.1.pom
Download https://repo1.maven.org/maven2/org/seleniumhq/selenium/selenium-api/3.5.1/selenium-api-3.5.1.pom
Download https://repo1.maven.org/maven2/com/google/guava/guava/23.0/guava-23.0.pom
Download https://repo1.maven.org/maven2/com/google/guava/guava-parent/23.0/guava-parent-23.0.pom
Download https://repo1.maven.org/maven2/com/google/code/findbugs/jsr305/1.3.9/jsr305-1.3.9.pom
Download https://repo1.maven.org/maven2/com/google/errorprone/error_prone_annotations/2.0.18/error_prone_annotations-2.0.18.pom
Download https://repo1.maven.org/maven2/com/google/errorprone/error_prone_parent/2.0.18/error_prone_parent-2.0.18.pom
Download https://repo1.maven.org/maven2/com/google/j2objc/j2objc-annotations/1.1/j2objc-annotations-1.1.pom
Download https://repo1.maven.org/maven2/org/codehaus/mojo/animal-sniffer-annotations/1.14/animal-sniffer-annotations-1.14.pom
Download https://repo1.maven.org/maven2/org/codehaus/mojo/animal-sniffer-parent/1.14/animal-sniffer-parent-1.14.pom
Download https://repo1.maven.org/maven2/org/codehaus/mojo/mojo-parent/34/mojo-parent-34.pom
Download https://repo1.maven.org/maven2/org/codehaus/codehaus-parent/4/codehaus-parent-4.pom
Download https://repo1.maven.org/maven2/org/seleniumhq/selenium/selenium-remote-driver/3.5.1/selenium-remote-driver-3.5.1.pom
Download https://repo1.maven.org/maven2/org/seleniumhq/selenium/selenium-support/3.5.1/selenium-support-3.5.1.jar
Download https://repo1.maven.org/maven2/org/seleniumhq/selenium/selenium-api/3.5.1/selenium-api-3.5.1.jar
Download https://repo1.maven.org/maven2/com/google/guava/guava/23.0/guava-23.0.jar
Download https://repo1.maven.org/maven2/com/google/code/findbugs/jsr305/1.3.9/jsr305-1.3.9.jar
Download https://repo1.maven.org/maven2/com/google/errorprone/error_prone_annotations/2.0.18/error_prone_annotations-2.0.18.jar
Download https://repo1.maven.org/maven2/com/google/j2objc/j2objc-annotations/1.1/j2objc-annotations-1.1.jar
Download https://repo1.maven.org/maven2/org/codehaus/mojo/animal-sniffer-annotations/1.14/animal-sniffer-annotations-1.14.jar
Download https://repo1.maven.org/maven2/org/seleniumhq/selenium/selenium-remote-driver/3.5.1/selenium-remote-driver-3.5.1.jar
:buildSrc:compileJava UP-TO-DATE
------------

This trace meant that something had changed in the dependencies (automatically), and Gradle was fetching newer versions of them.

This was the smoking gun we were looking for. Searching for Selenium 3.5.1 together with Appium Java-Client 5.0.0-BETA9 quickly showed only one result - a bug reported against Java-Client 5.0.0-BETA9 - Warning: Selenium 3.5.1 breaks java client 5.0.0-BETA9

The solution / workaround had already been provided by QAutomatron:

configurations.all {
    resolutionStrategy {
        // Pin the Selenium artifacts to 3.4.0 until a java-client release
        // compatible with Selenium 3.5.1 is available.
        force 'org.seleniumhq.selenium:selenium-support:3.4.0',
                'org.seleniumhq.selenium:selenium-api:3.4.0'
    }
}
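
For context: this snippet goes into the project's build.gradle. The forced versions act as a temporary pin; once a java-client release compatible with Selenium 3.5.1 is available, the force block can be removed so that dependencies resolve normally again.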

This resolved our issue for now.


Saturday, July 29, 2017

Why I needed to build my own MAD-LAB

I spoke about "Build your own MAD-LAB - for Mobile Test Automation" at vodQA - The Saga Continues! at Vuclip in collaboration with ThoughtWorks on Sat, 29th July 2017.

Join the vodQA group on facebook / LinkedIn to be part of the vodQA community.

Here are details of the talk:

Description

Building a real-(mobile)-device lab for Test Automation is NOT a common thing – it is difficult, high maintenance, expensive! Yet, I had to do it!


Setting the stage - I am coordinating all Testing activities for VIU - an OTT (over-the-top entertainment) product available on Android, iOS and WAP platforms. This product delivers high-quality, popular video content in many different languages, for consumers in many different regions. One of the main items in my charter is to implement functional test automation for consumer / user functionalities, and to provide quick feedback to the team and stakeholders on the “true” state of the product on all supported platforms for VIU.


In this talk, using the above set context, I will be sharing the following:
  • The automation strategy
  • Chosen tech-stack
  • How (and why) no cloud-based solution worked for me
  • Implementation details of MAD-LAB - which arose from the learnings of the failed experiments, and resulted in setting up my own in-house real-device lab.
  • The core implementation (code) of MAD-LAB (already open-sourced)

Takeaways for attendees

  • Learning from my experiments (what worked, or didn’t)
  • Approach to testing an OTT (entertainment domain) product
  • How to build a Test Automation Framework using cucumber-jvm / Appium
  • Implementation details for managing devices, optimizing test execution via distribution, the Appium server, custom reporting, enabling automatic test execution via CI on each new app build, and more.

Slides

Video (talk starts at 04m:45s)




Friday, June 9, 2017

Changing logcat buffer size in Android devices ... almost works

My (debug-build of the) app under test logs extra information about test execution to system logs, which are accessible via logcat on Android devices. This is very powerful, as I can now run my cucumber-jvm / Appium tests, copy the logcat output after the test execution completes, parse it for relevant information, and do appropriate assertions on the same.
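
As an illustration, here is a minimal sketch of that flow (my own example, with a hypothetical log marker and file name; assumes adb is on the PATH):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.stream.Collectors;

public class LogcatAssertions {
    public static void main(String[] args) throws Exception {
        // Dump the current logcat buffer (-d exits after dumping) and capture it.
        Process p = new ProcessBuilder("adb", "logcat", "-d").start();
        String logs;
        try (BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()))) {
            logs = r.lines().collect(Collectors.joining(System.lineSeparator()));
        }

        // Save a copy of the logs for later debugging / reporting.
        Path logFile = Paths.get("test-logs", "logcat.txt");
        Files.createDirectories(logFile.getParent());
        Files.write(logFile, logs.getBytes());

        // Assert on an app-specific marker the debug build is expected to log.
        if (!logs.contains("TEST_EVENT: video_playback_started")) {
            throw new AssertionError("Expected log marker not found in logcat");
        }
    }
}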

The default logcat buffer size on the Android devices I have seen is 256 KB. This is too small for me, as I end up losing the earlier information, and hence my assertions fail.

Thankfully, there is a programmatic way to change the logcat buffer size on the device before running tests. The command is

adb logcat -G 3M

(You can check the current buffer sizes with adb logcat -g.)

This adb command works on the Motorola devices in my MAD LAB, but does not work on Samsung devices. The error I see on running the above command is "failed to set the log size".

Any idea why this does not work on Samsung devices? Or rather, what do I need to do to change their logcat buffer size?

[UPDATE] - Interestingly, this works on the Samsung Galaxy S7, but NOT on the Samsung J5 Prime or Samsung J7 Prime.

Tuesday, May 2, 2017

Criteria for setting up a Mobile Test Automation LAB

I recently got asked this question related to the MAD LAB (Mobile Automation Devices LAB) - "Would like to understand how can we setup something similar in our organisation?"

Since this question is applicable to all those thinking of setting up, or who have already set up, their own lab, I thought I would share my answer here.

To setup your own LAB for Mobile Test Automation, multiple things need to align:


Supportive management who -
  • allow experiments (within reason, of course) and encourage learning through failure,
  • are willing to invest in infrastructure ($$)

Skilled and passionate team members who -
  • understand the domain well,
  • are willing to learn, experiment, re-learn and fail fast,
  • keep looking for innovative solutions to the problems at hand,
  • do not reinvent the wheel.

Philosophy aside, our MAD LAB has the following: 
  • Mac Minis (8-12 devices per Mac Mini),
  • Powered USB hubs (the ones I use are working pretty well),
  • High-quality USB cables (the ones I use are working pretty well),
  • CI (Jenkins) set up correctly to keep running tests continuously, with proper reporting in place (else what's the use of running tests if you do not look at the results)

You could start with a similar setup IF it fits your product-under-test context.

After I answered this on LinkedIn, I realised there are more parameters to think about than just the above.
  • Knowing which devices to use in your Lab
  • Having good, reliable Internet connection
  • Devices should be "seen" easily
  • Should be easy to work on / with the devices as and when required
  • Know how the devices will be placed in the lab. We tried the following:
    • 2-way tape - that didn't work. Devices used to stay up for a few days, then "drop" suddenly. Of course, that also depends on the back surface of the devices.
    • We tried many mobile stands / hangers - but each had its own limitations



    • Finally, I found an industrial-strength velcro (1" velcro tape that can hold a couple of pounds of weight) - and my devices have not budged since. PS: Please be careful when putting this velcro on the devices. If it gets on your hand, you will have a velcro tattoo for a long, long time.

What other parameters would you consider for setting up your own Lab? Looking forward to the comments below.