
Research And Design Of Performance Evaluation Model For Crowdsourcing Test Platform

Posted on: 2021-04-14    Degree: Master    Type: Thesis
Country: China    Candidate: M Zhu    Full Text: PDF
GTID: 2428330614957273    Subject: Software engineering
Abstract/Summary:
Crowdsourced testing is a new software testing method that has attracted extensive attention from both academia and industry. In crowdsourced testing, test workers help software managers perform tests and submit test reports, which the managers must then review and evaluate manually. In this evaluation process, the performance evaluation method is particularly important: it determines how the task requester or the crowdsourced testing platform assesses how well test workers have completed their tasks. A well-designed set of incentive mechanisms can effectively increase workers' motivation and thereby improve the quality of the reports they submit. Such incentive mechanisms must be designed carefully, because many workers try to finish a job as quickly as possible to maximize their profit, which leads to low-quality defect reports. To improve the quality of the defect reports submitted by test workers and to evaluate their work fairly, this thesis studies performance evaluation methods for crowdsourced testing platforms and obtains the following three research results.

(1) To address the difficulty of quantifying task difficulty when designing an automated system, this thesis proposes an approach for detecting repeated (duplicate) defect reports based on text similarity. The repetition rate of defect reports is used to measure task difficulty and serves as an important indicator in performance evaluation.

(2) To address the problem of classifying the priority of defect reports submitted by test workers, this thesis proposes a priority classification model for defect reports based on deep learning. It designs a text classification model combining a CNN with a BiLSTM, and experiments on the defect repository of the large-scale open-source project Eclipse achieve good results.

(3) This thesis proposes a performance evaluation model for crowdsourced testing platforms based on defect-report difficulty and defect-report priority. The model integrates the duplicate detection method and the priority classification method, and it accounts for all the evaluation indicators of the platform's performance evaluation model: the number of tasks, the difficulty of tasks, and the priority of tasks. In experiments comparing it with the performance evaluation models of several typical crowdsourced testing platforms currently on the market, the proposed model outperforms the other evaluation models on these indicators.
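For result (1), the sketch below illustrates duplicate defect-report detection based on text similarity. The abstract names text similarity but not a concrete measure; the TF-IDF representation, cosine similarity, and the 0.8 threshold used here are illustrative assumptions, not the thesis's actual design.

```python
# Minimal sketch: detect duplicate (repeated) defect reports via text
# similarity. TF-IDF + cosine similarity and the 0.8 threshold are
# assumptions; the thesis's actual measure is not reproduced here.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def find_duplicates(reports, threshold=0.8):
    """Return (i, j, similarity) for report pairs above `threshold`.
    A report's duplicate count can then serve as a proxy for task
    difficulty: the more duplicates, the easier the task."""
    tfidf = TfidfVectorizer(stop_words="english")
    matrix = tfidf.fit_transform(reports)   # one TF-IDF row per report
    sims = cosine_similarity(matrix)        # pairwise similarity matrix
    pairs = []
    for i in range(len(reports)):
        for j in range(i + 1, len(reports)):
            if sims[i, j] >= threshold:
                pairs.append((i, j, float(sims[i, j])))
    return pairs

reports = [
    "App crashes when clicking the save button",
    "Crash occurs on pressing the save button",
    "Font rendering is blurry on high-DPI screens",
]
print(find_duplicates(reports))
```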
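For result (2), the following is a hedged sketch of a text classifier that combines a CNN layer with a BiLSTM layer, in the spirit of the model described. The vocabulary size, embedding dimension, sequence length, layer sizes, and the five priority levels (P1 to P5, as used in Eclipse's Bugzilla) are assumptions; the thesis's exact architecture and hyperparameters are not reproduced here.

```python
# Sketch of a CNN + BiLSTM priority classifier for defect-report text.
# All hyperparameters below are assumed for illustration.
import tensorflow as tf
from tensorflow.keras import layers, models

VOCAB_SIZE = 20000      # assumed vocabulary size
EMBED_DIM = 128         # assumed embedding dimension
MAX_LEN = 200           # assumed (padded) report length in tokens
NUM_PRIORITIES = 5      # Eclipse/Bugzilla priorities P1..P5

model = models.Sequential([
    layers.Input(shape=(MAX_LEN,)),            # integer token ids
    layers.Embedding(VOCAB_SIZE, EMBED_DIM),
    # CNN: extract local n-gram features from the report text
    layers.Conv1D(128, kernel_size=3, activation="relu"),
    layers.MaxPooling1D(pool_size=2),
    # BiLSTM: capture longer-range context in both directions
    layers.Bidirectional(layers.LSTM(64)),
    layers.Dropout(0.5),
    layers.Dense(NUM_PRIORITIES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

Training would then call `model.fit` on tokenized, padded report texts with integer priority labels.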
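For result (3), the toy scoring function below shows one way the three indicators (number of tasks, task difficulty, task priority) might be combined into a single performance score. The weights, the normalization, and the priority-to-score mapping are purely illustrative assumptions, not the thesis's actual model.

```python
# Illustrative combination of the three evaluation indicators into one
# score. Weights, the count cap, and the priority mapping are assumed.
def performance_score(reports, w_count=0.2, w_difficulty=0.4,
                      w_priority=0.4, count_cap=50):
    """Score one worker from their accepted defect reports.

    Each report is a dict with:
      'difficulty' in [0, 1] -- e.g. derived from the duplicate rate in (1)
      'priority'   in 1..5   -- predicted by the classifier in (2), 1 = highest
    """
    if not reports:
        return 0.0
    # Normalize the task count into [0, 1] with an assumed cap
    count_term = min(len(reports), count_cap) / count_cap
    difficulty_term = sum(r["difficulty"] for r in reports) / len(reports)
    # Map priority 1 (highest) .. 5 (lowest) onto 1.0 .. 0.2
    priority_term = sum((6 - r["priority"]) / 5 for r in reports) / len(reports)
    return (w_count * count_term
            + w_difficulty * difficulty_term
            + w_priority * priority_term)

worker_reports = [
    {"difficulty": 0.9, "priority": 1},   # hard task, top priority
    {"difficulty": 0.4, "priority": 3},
]
print(round(performance_score(worker_reports), 3))
```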
Keywords/Search Tags: Crowdsourced testing, Defect reports, Performance appraisal, Repeatability, Priority