This paper uses three size metrics, all collectable during the design phase, to analyze the potentially confounding effect of class size on the associations between object-oriented (OO) metrics and maintainability. To draw conclusions that are as general as possible, the confounding effect of class size is analyzed on 127 C++ systems and 113 Java systems. For each OO metric, the indirect effect, which represents the distortion of the association caused by class size, and its variance are first computed for each individual system. Then, a statistical meta-analysis technique is used to compute the average indirect effect over all the systems and to determine whether it is significantly different from zero. The experimental results show that confounding effects of class size on the associations between OO metrics and maintainability generally exist, regardless of which size metric is used. Therefore, empirical studies validating OO metrics against maintainability should consider class size as a confounding variable.
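The pooling step can be pictured with a small sketch. The following is a minimal illustration of fixed-effect inverse-variance meta-analysis of per-system indirect effects, assuming those effects and their variances have already been estimated; all names and data values are illustrative, not the paper's.

```python
import math

def meta_analyze(effects, variances):
    """Fixed-effect inverse-variance meta-analysis of per-system
    indirect effects; returns the pooled effect, its standard error,
    and the z statistic for testing a zero average effect."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return pooled, se, pooled / se

# Hypothetical indirect effects of class size for one OO metric,
# measured on several systems (values are made up).
effects = [0.12, 0.08, 0.15, 0.05]
variances = [0.002, 0.004, 0.003, 0.005]
pooled, se, z = meta_analyze(effects, variances)
print(f"pooled indirect effect = {pooled:.3f}, z = {z:.2f}")
# |z| > 1.96 would suggest the average indirect effect differs
# from zero at the 0.05 level.
```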
In order to reduce the knowledge reasoning space and improve knowledge processing efficiency, a framework for distributed attribute reduction in concept lattices is presented. Employing an idea similar to that of rough set theory, characterizations of core attributes, dispensable attributes, and unnecessary attributes are given from the viewpoint of local formal contexts and virtual global contexts. A determinant theorem of attribute reduction is derived. Based on these results, an approach for distributed attribute reduction is presented: it first performs reduction independently on each local context using existing approaches, and then merges the local reducts to compute reducts of the global context. An algorithm implementing the approach is provided and its effectiveness is validated. The distributed reduction algorithm not only improves computational efficiency but also avoids problems with existing approaches, such as data privacy risks and communication overhead.
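The merge step can be sketched schematically. The code below assumes each local reduct is available as a set of attribute names and that a predicate is_consistent, testing dispensability against the virtual global context, is supplied; both names are hypothetical stand-ins for the paper's constructions.

```python
def merge_local_reducts(local_reducts, is_consistent):
    """Schematic merge step: start from the union of the local reducts
    and greedily drop attributes that remain dispensable in the
    (virtual) global context. `is_consistent(attrs)` is an assumed
    predicate checking that `attrs` still determines the global
    concept lattice."""
    candidate = set().union(*local_reducts)
    for a in sorted(candidate):
        trial = candidate - {a}
        if is_consistent(trial):   # a is globally dispensable
            candidate = trial
    return candidate

# Illustrative use: two local reducts and a toy consistency test that,
# for the sake of the example, requires attributes {"a", "c"}.
locals_ = [{"a", "b"}, {"c", "b"}]
print(sorted(merge_local_reducts(locals_, lambda s: {"a", "c"} <= s)))
# -> ['a', 'c']
```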
This paper proposes a checking method based on mutual instances and discusses three key problems in the method: how to deal with mistakes in the mutual instances, and how to deal with too many or too few mutual instances. It provides checking based on weighted mutual instances for fault tolerance, gives a way to partition large-scale sets of mutual instances, and proposes a process that greatly reduces the manual annotation work needed to obtain more mutual instances. Intension annotation, which further improves the checking method, is also discussed. The method is practical and effective for checking subsumption relations between concept queries in different ontologies based on mutual instances.
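A fault-tolerant subsumption check of this flavor might look as follows; the threshold, the weighting scheme, and all names are illustrative assumptions rather than the paper's exact formulation.

```python
def subsumes(instances_a, instances_b, weights, threshold=0.95):
    """Fault-tolerant check: concept A is judged to subsume concept B
    if the weighted fraction of B's mutual instances that also belong
    to A reaches `threshold`. Weights are assumed to down-rank
    instances that are likely annotation mistakes."""
    total = sum(weights[i] for i in instances_b)
    if total == 0:
        return False  # too few mutual instances to decide
    covered = sum(weights[i] for i in instances_b if i in instances_a)
    return covered / total >= threshold

# Illustrative data: instance 3 is a suspected mistake, so it carries
# a low weight and barely affects the decision.
a, b = {1, 2, 4}, [1, 2, 3]
weights = {1: 1.0, 2: 1.0, 3: 0.1}
print(subsumes(a, b, weights))  # -> True
```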
KANG Da-zhou, LU Jian-jiang, XU Bao-wen, WANG Peng, ZHOU Jin
Fuzzy ontologies are efficient tools for handling fuzzy and uncertain knowledge on the Semantic Web, but heterogeneity problems arise when seeking interoperability among different fuzzy ontologies. This paper uses concept approximation between fuzzy ontologies, based on instances, to solve these heterogeneity problems. It first proposes an instance selection technique based on instance clustering and weighting, which unifies the fuzzy interpretations of different ontologies and reduces the number of instances to increase efficiency. The paper then reduces the problem of computing the approximations of concepts to that of computing the least upper approximations of atom concepts. It optimizes the search strategy by extending atom concept sets and defining least upper bounds of concepts, thereby shrinking the search space. An efficient algorithm for searching the least upper bounds of concepts is given.
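The search for least upper bounds can be sketched at the level of concept extensions. Assuming each concept of the other ontology is represented by its (selected and weighted) instance set, a minimal-cover search might look like this; all names are illustrative.

```python
def least_upper_bounds(target_instances, concepts):
    """Sketch of the least-upper-bound search: among concepts of the
    other ontology (name -> instance set), keep those whose extensions
    cover the target concept's instances, then drop any candidate that
    strictly contains another candidate, leaving the minimal covers."""
    candidates = {n: s for n, s in concepts.items()
                  if target_instances <= s}
    return [n for n, s in candidates.items()
            if not any(t < s for m, t in candidates.items() if m != n)]

# Illustrative use: "Animal" covers the target but is not minimal.
concepts = {"Animal": {1, 2, 3, 4}, "Bird": {1, 2}, "Fish": {3, 4}}
print(least_upper_bounds({1, 2}, concepts))  # -> ['Bird']
```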
LI Yan-hui, XU Bao-wen, LU Jian-jiang, KANG Da-zhou, ZHOU Jing-jing
In order to enable clustering to be carried out in a lower dimension, a new feature selection method for clustering is proposed. The method has three steps, all carried out in a wrapper framework. First, all the original features are ranked according to their importance, using an evaluation function E(f) introduced to evaluate the importance of a feature. Second, the set of important features is selected sequentially. Finally, possible redundant features are removed from the important feature subset. Because the features are selected sequentially, there is no need to search the large space of feature subsets, so efficiency is improved. Experimental results show that this method finds the set of features important for clustering and discards unimportant features or features that may hinder the clustering task.
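The three steps can be outlined in a short sketch. Here a simple top-k cut stands in for the paper's sequential selection criterion, and E and redundant are assumed callables supplied by the wrapper (for example, E might score clustering quality when feature f is included).

```python
def select_features(features, E, redundant, k):
    """Sketch of the three-step wrapper procedure: rank features by the
    evaluation function E(f), take the top-ranked ones, then drop
    features judged redundant against those already kept."""
    ranked = sorted(features, key=E, reverse=True)   # step 1: rank
    important = ranked[:k]                           # step 2: select
    selected = []
    for f in important:                              # step 3: prune
        if not any(redundant(f, g) for g in selected):
            selected.append(f)
    return selected
```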
Some metamorphic relations (MRs) are poor at detecting faults in metamorphic testing. In this paper, a method for constructing compositional MRs (CMRs) based on the inference rules of propositional logic is presented. The method constructs new MRs by composing existing MRs pairwise. Because a CMR inherits the advantages of all the MRs that form it, its fault-detection performance is strong. Moreover, composition greatly reduces the number of relations, so a program can be tested with far fewer test cases when CMRs are used. To investigate the characteristics of CMRs, two case studies are analyzed. The experimental results show that a CMR's performance is mostly determined by the central MRs forming it and by the sequence of composition, and that testing efficiency improves greatly when CMRs are used.
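Pairwise composition is easy to picture when each MR is modeled as an input transform plus an output map; the following sketch and the sin(x) example are illustrative, not the paper's case studies.

```python
import math

def compose(mr1, mr2):
    """Pairwise composition of metamorphic relations. Each MR is
    modeled as (transform, output_map): for program P, the relation
    asserts P(transform(x)) == output_map(P(x)). The composed MR
    applies mr1's transform and then mr2's, so one follow-up execution
    checks both relations at once."""
    t1, g1 = mr1
    t2, g2 = mr2
    return (lambda x: t2(t1(x)), lambda y: g2(g1(y)))

# MR1: sin(x + pi) == -sin(x);  MR2: sin(-x) == -sin(x)
mr1 = (lambda x: x + math.pi, lambda y: -y)
mr2 = (lambda x: -x,          lambda y: -y)
t, g = compose(mr1, mr2)
x = 0.7
assert abs(math.sin(t(x)) - g(math.sin(x))) < 1e-9  # composed MR holds
```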
The emphasis of component system regression testing is retesting the event interactions between updated components and the other components in a system. A component system regression testing method based on a new component testing association model (CTAM) is proposed. First, the modification-affected component groups are identified by impact analysis on the CTAM, and each component in a group is assigned an influence degree. Then, previous test cases are selected according to the influence degrees to generate a minimal regression test suite. Compared with traditional methods, CTAM is derived from statistics on the interaction events observed in previous test executions and focuses on the complicated relationships between components, which makes it more applicable to component system regression testing.
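Influence-based selection can be sketched as a filter over previous test cases; the influence map, the coverage map, and the threshold below are illustrative assumptions, not the paper's definitions.

```python
def select_regression_suite(test_cases, influence, covers, threshold=0.5):
    """Sketch of influence-based selection: given per-component
    influence degrees from impact analysis on the association model,
    keep the previous test cases that exercise at least one component
    whose influence degree reaches `threshold`."""
    affected = {c for c, d in influence.items() if d >= threshold}
    return [t for t in test_cases if covers[t] & affected]

# Illustrative use: only t1 touches a strongly affected component.
influence = {"A": 0.9, "B": 0.2}
covers = {"t1": {"A"}, "t2": {"B"}}
print(select_regression_suite(["t1", "t2"], influence, covers))  # ['t1']
```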
Many high-performance database servers are becoming idle as tax data are integrated into national tax data centers. We make use of these idle servers to set up a provincial tax Grid based on the Open Grid Services Architecture (OGSA). We put forward practical methods to integrate the databases, to define and create basic modular Grid services, and to apply agents to manage the Grid services. This technical scheme follows a service-oriented architecture (SOA) and avoids wasting resources. Tests show that it greatly improves the quality of tax services.
This paper proposes a data-flow testing method for Web services composition. First, to facilitate data-flow analysis and constraint collection, the existing model representation of the business process execution language (BPEL) is modified in conjunction with an analysis of data dependencies, and an exact representation of dead path elimination (DPE) is proposed, which overcomes the difficulties DPE brings to data-flow analysis. Then, definition and use information is collected according to data-flow rules by parsing the BPEL and Web Services Description Language (WSDL) documents, and a def-use annotated control flow graph is created. Based on this model, data-flow anomalies that indicate potential errors can be discovered by traversing the paths of the graph, and the all-du-paths used in dynamic data-flow testing of Web services composition are generated automatically; testers can then design test cases according to the constraints collected for each selected path.
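Def-use pair discovery over such an annotated graph can be sketched as a def-clear reachability search; the graph encoding below is an assumption for illustration, not the paper's BPEL model.

```python
def du_pairs(cfg, defs, uses):
    """Sketch of def-use pair discovery on a def-use annotated control
    flow graph: for each variable v defined at node d, follow def-clear
    paths (no redefinition of v) to the nodes that use v. cfg maps
    node -> successor list; defs/uses map node -> set of variables."""
    pairs = set()
    for d in cfg:
        for v in defs.get(d, set()):
            stack, seen = list(cfg[d]), set()
            while stack:
                n = stack.pop()
                if n in seen:
                    continue
                seen.add(n)
                if v in uses.get(n, set()):
                    pairs.add((d, n, v))          # a du-pair for v
                if v not in defs.get(n, set()):   # path stays def-clear
                    stack.extend(cfg[n])
            # a def of v with no reachable use hints at an anomaly
    return pairs

# Illustrative graph: x defined at node 1, used at node 3.
cfg = {1: [2], 2: [3], 3: []}
print(du_pairs(cfg, {1: {"x"}}, {3: {"x"}}))  # -> {(1, 3, 'x')}
```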