Monday, May 14, 2018

Testing Android voice apps automatically


Let’s review the AndroidViewClient/culebra concertina mode features, compare them with monkey, and see how we can use them to test voice-based UIs or Alexa Skills.

Unlike monkey, which sends pseudo-random events, culebra’s concertina mode analyzes the content of the screen and randomly selects a suitable event or action for a randomly chosen target, usually a View.
Also unlike monkey, the generated actions and their parameters are saved in a Python script that can later be executed as many times as needed.
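
To make this more concrete, a concertina-generated script is, in essence, a regular AndroidViewClient script. The snippet below is a minimal, hypothetical sketch of what such a script might look like; the target view text and the chosen actions are assumptions for illustration, not the output of an actual run.

    #! /usr/bin/env python
    # -*- coding: utf-8 -*-
    # Hypothetical sketch of a concertina-style generated script.
    # The view text and actions below are illustrative assumptions.

    from com.dtmilano.android.viewclient import ViewClient

    # Connect to the device or emulator currently attached via adb
    device, serialno = ViewClient.connectToDeviceOrExit()
    vc = ViewClient(device, serialno)

    # Dump the current window content so Views can be located
    vc.dump()

    # Recorded actions are replayed as plain API calls, so the same
    # sequence can be executed as many times as needed.
    view = vc.findViewWithTextOrRaise(u'OK')   # assumed target View
    view.touch()                               # touch the chosen View

    # Dump again to verify the resulting screen
    vc.dump()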

Read more at https://medium.com/@dtmilano

Wednesday, February 21, 2018

Testing Alexa Skills — Autogenerated tests

You have almost finished your Amazon Alexa Skill and have now started the quest for the Holy Grail of Alexa Testing. You are desperately searching for a way to automate it, but even googling it gives no obvious results.
Fortunately, your search is over.


Now we will analyze how to automate the generation of such tests. Because some of the details needed to create the tests are already available in the Skill’s Interaction Model, we will leverage it to reduce the information you have to provide to a bare minimum.
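
As a rough illustration of the idea, and not the actual generator described in the article, the sketch below reads a standard interaction model JSON file and emits one test stub per intent. The model path and the invoke_skill() helper are assumptions made for the sake of the example.

    #!/usr/bin/env python
    # Minimal sketch: generate one test stub per intent found in the
    # Skill's Interaction Model (standard ASK JSON layout assumed).
    # invoke_skill() is a hypothetical helper that would send an
    # IntentRequest to the skill handler and return its JSON response.

    import json

    def generate_tests(model_path):
        with open(model_path) as f:
            model = json.load(f)

        intents = model['interactionModel']['languageModel']['intents']
        tests = []
        for intent in intents:
            name = intent['name']
            # Built-in intents (AMAZON.*) could be skipped or handled separately
            tests.append(
                "def test_{0}():\n"
                "    # invoke_skill() is assumed to build an IntentRequest\n"
                "    # for the given intent and return the skill's response\n"
                "    response = invoke_skill('{1}')\n"
                "    assert 'outputSpeech' in response['response']\n"
                .format(name.replace('.', '_').lower(), name)
            )
        return '\n\n'.join(tests)

    if __name__ == '__main__':
        # 'models/en-US.json' is an assumed path to the interaction model
        print(generate_tests('models/en-US.json'))

Since the intent names and sample utterances are already in the model, a generator along these lines keeps the information you have to provide to a bare minimum, which is exactly the point of autogenerating the tests.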


Thursday, January 25, 2018

Testing Alexa Skills — The grail quest


You have almost finished your Amazon Alexa Skill and have now started the quest for the Holy Grail of Alexa Testing. You are desperately searching for a way to automate it, but even googling it gives no obvious results.
Fortunately, your search is over.

Read the article on medium: https://medium.com/@dtmilano/testing-alexa-skills-the-grail-quest-3beba82450bb