Mobile applications play an important role in our daily life. Statistics show that an average user spends more than two hours a day on mobile applications, and this figure keeps growing. However, guaranteeing the correctness of these applications remains a challenge. Automated testing of mobile applications is mainly about generating event sequences, and different orders of events can lead to different results. A commonly used app usually contains many interfaces and events, which results in a large combinatorial space of possible events and transitions, so exploring all functionalities and states is time-consuming. Moreover, apps are developed under time-to-market pressure, which makes efficient automated testing tools a necessity.

Model-based and systematic strategies have been applied to Android GUI testing. The use of models reduces the execution of redundant events, while evolutionary algorithms and symbolic execution make it possible to generate specific inputs for hard-to-reach functionalities. Compared with random testing, these two kinds of strategies guide exploration more efficiently. However, when compared with random strategies in experiments, their advantages are not obvious and the strategies still need improvement. Model-based strategies generate test cases according to application models constructed with a static or dynamic approach, so high-quality models are essential for good testing results. However, it is challenging to explore all the states of an Android application. Moreover, Android apps are quite flexible, and some undetermined transitions, which produce different results in different contexts, are hard to capture with existing approaches. Systematic approaches are mainly designed to reveal specific functionalities that are hard to execute with other strategies, but they are less scalable and often perform worse on overall testing metrics such as code coverage and fault detection.

To tackle the aforementioned challenges, we propose Q-testing, a novel reinforcement learning based approach that benefits from both random and model-based approaches to automated testing of Android applications. Q-testing explores apps with a curiosity-driven strategy. A neural network makes it possible to distinguish different functional scenarios and to guide the testing tool towards unfamiliar functionalities.

In this paper, we make the following main contributions:

1. We propose a novel curiosity-driven exploration strategy, named Q-testing, based on reinforcement learning to guide automated Android testing. Q-testing utilizes a memory set to record part of the previously visited states and guides the testing towards unfamiliar functionalities. This strategy helps to mitigate unbalanced testing and the heavy reliance on models.

2. We collect samples and train a neural network that can efficiently and effectively distinguish different states at the functional level. Q-testing leverages this neural network to compute the reinforcement learning reward, but its use is not limited to reinforcement learning: other tasks, such as state compression and code recommendation, may also benefit from it.

3. We implement a tool and conduct a large-scale experiment. Results show that our approach outperforms existing ones, including Monkey, Stoat, and Sapienz, in terms of both code coverage and fault detection. So far, 22 of our reported faults have been confirmed, among which 7 have been fixed.
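To make the curiosity-driven mechanism in the first contribution concrete, the following is a minimal Python sketch, not the authors' implementation: the similarity function, memory-set threshold, state representation, and Q-learning parameters are illustrative assumptions standing in for the trained neural network and the actual exploration loop.

```python
import random

def state_similarity(state_a, state_b):
    """Hypothetical similarity in [0, 1]; Q-testing uses a trained neural network instead."""
    shared = len(set(state_a) & set(state_b))
    total = len(set(state_a) | set(state_b)) or 1
    return shared / total

def curiosity_reward(state, memory, threshold=0.8):
    """High reward for states unlike anything in the memory set, i.e. unfamiliar functionalities."""
    if not memory:
        return 1.0
    best = max(state_similarity(state, m) for m in memory)
    return 0.0 if best >= threshold else 1.0 - best

def choose_event(q_table, state_key, events, epsilon=0.2):
    """Epsilon-greedy selection over the events available in the current GUI state."""
    if random.random() < epsilon:
        return random.choice(events)
    return max(events, key=lambda e: q_table.get((state_key, e), 0.0))

def update_q(q_table, state_key, event, reward, next_state_key, next_events,
             alpha=0.5, gamma=0.9):
    """Standard tabular Q-learning update driven by the curiosity reward."""
    best_next = max((q_table.get((next_state_key, e), 0.0) for e in next_events),
                    default=0.0)
    old = q_table.get((state_key, event), 0.0)
    q_table[(state_key, event)] = old + alpha * (reward + gamma * best_next - old)
```

A testing loop built on this sketch would execute the chosen event on the device, observe the resulting GUI state, compute the curiosity reward against the memory set, update the Q-table, and add sufficiently novel states to the memory set so that already-explored functionalities stop attracting the tool.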