metapath2vec: Scalable Representation Learning for Heterogeneous Networks
We study the problem of representation learning in heterogeneous networks. Its unique challenges come from the existence of multiple types of nodes and links, which limit the feasibility of conventional network embedding techniques. We develop two scalable representation learning models, namely metapath2vec and metapath2vec++. The metapath2vec model formalizes meta-path-based random walks to construct the heterogeneous neighborhood of a node and then leverages a heterogeneous skip-gram model to learn node embeddings. The metapath2vec++ model further enables the simultaneous modeling of structural and semantic correlations in heterogeneous networks. Extensive experiments show that metapath2vec and metapath2vec++ are able to not only outperform state-of-the-art embedding models in various heterogeneous network mining tasks, such as node classification, clustering, and similarity search, but also discern the structural and semantic correlations between diverse network objects.
- Yuxiao Dong, Nitesh V. Chawla, and Ananthram Swami. “metapath2vec: Scalable Representation Learning for Heterogeneous Networks.” Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD), pp. 135–144, 2017.
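The core of metapath2vec is the meta-path-guided random walk: at each step, the walker may only move to neighbors whose node type matches the next symbol of a chosen meta-path scheme (e.g., author–paper–venue–paper–author). The sketch below illustrates that walk on a toy heterogeneous graph; the data structures (`graph`, `node_types`) and function name are illustrative, not the released implementation.

```python
import random

def metapath_walk(graph, node_types, start, metapath, walk_length):
    """One meta-path-guided random walk (sketch).

    graph: dict mapping each node to a list of neighbor nodes
    node_types: dict mapping each node to its type, e.g. 'A', 'P', 'V'
    metapath: cyclic type scheme whose first and last symbols match,
              e.g. ['A', 'P', 'V', 'P', 'A'] for author-paper-venue walks
    """
    walk = [start]
    period = len(metapath) - 1  # the scheme repeats with this period
    step = 0
    while len(walk) < walk_length:
        current = walk[-1]
        next_type = metapath[(step + 1) % period]
        # Restrict candidates to neighbors of the required next type.
        candidates = [n for n in graph[current] if node_types[n] == next_type]
        if not candidates:
            break
        walk.append(random.choice(candidates))
        step += 1
    return walk
```

The resulting type-constrained walks are then fed to a (heterogeneous) skip-gram model in place of ordinary random-walk corpora.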
WEKA for Imbalanced Data (SMOTE and HDDT)
We’ve modified WEKA (v3.7.14), the popular Java-based data mining software package, to include two methods developed specifically for learning from imbalanced data: the Synthetic Minority Over-sampling Technique (SMOTE), a popular sampling method for data preprocessing, and the Hellinger Distance Decision Tree (HDDT), a skew-insensitive decision-tree algorithm for classification. In the provided WEKA implementation, SMOTE can be found as a supervised instance filter, while HDDT can be found as a tree-based classifier. The SMOTE filter implementation for WEKA may also be downloaded separately here.
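SMOTE oversamples the minority class by interpolating between a minority point and one of its k nearest minority-class neighbors, rather than duplicating existing points. The following is a simplified sketch of that idea in Python (the released filter is a Java WEKA component; the function name and brute-force neighbor search here are illustrative only).

```python
import random

def smote(minority, n_synthetic, k=5):
    """Generate synthetic minority samples by interpolating between a
    minority point and a randomly chosen one of its k nearest
    minority-class neighbors (the core SMOTE idea; a sketch, not the
    WEKA filter)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    synthetic = []
    for _ in range(n_synthetic):
        x = random.choice(minority)
        # Brute-force k-nearest-neighbor search within the minority class.
        neighbors = sorted((p for p in minority if p is not x),
                           key=lambda p: dist(x, p))[:k]
        nn = random.choice(neighbors)
        gap = random.random()  # position along the line segment x -> nn
        synthetic.append(tuple(xi + gap * (ni - xi)
                               for xi, ni in zip(x, nn)))
    return synthetic
```

Because each synthetic point lies on a segment between two real minority points, the method widens the minority region instead of merely re-weighting it.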
For more details on these methods, please consult the following publications:
- Nitesh V. Chawla, Kevin W. Bowyer, Lawrence O. Hall, and Philip Kegelmeyer. “SMOTE: Synthetic Minority Over-Sampling Technique.” Journal of Artificial Intelligence Research (JAIR), 16(1):321–357, 2002.
- David A. Cieslak, T. Ryan Hoens, Nitesh V. Chawla, and W. Philip Kegelmeyer. “Hellinger Distance Decision Trees are Robust and Skew-Insensitive.” Data Mining and Knowledge Discovery (DMKD), 24(1):136–158, 2012.
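HDDT replaces information gain with the Hellinger distance between the class-conditional distributions induced by a candidate split, which makes the criterion insensitive to class skew. A minimal sketch of that criterion for a split into branches, under the assumption of binary classes:

```python
from math import sqrt

def hellinger(pos_counts, neg_counts):
    """Hellinger distance between the class-conditional distributions of
    a candidate split (the skew-insensitive criterion HDDT maximizes).

    pos_counts[j] / neg_counts[j]: number of positive / negative
    training examples falling into branch j of the split.
    """
    pos_total = sum(pos_counts)
    neg_total = sum(neg_counts)
    return sqrt(sum((sqrt(p / pos_total) - sqrt(n / neg_total)) ** 2
                    for p, n in zip(pos_counts, neg_counts)))
```

Note that scaling all negative counts by a constant leaves the value unchanged, which is exactly the skew-insensitivity property: the criterion depends on how the split separates each class's distribution, not on the class ratio.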
Model Monitor (M2)
Model Monitor is a Java toolkit for the systematic evaluation of classifiers under changes in distribution. It provides methods for detecting distribution shifts in data, comparing the performance of multiple classifiers under shifts in distribution, and evaluating the robustness of individual classifiers to distribution change. As such, it allows users to determine the best model (or models) for their data under a number of potential scenarios. Additionally, Model Monitor is fully integrated with the WEKA machine learning environment, so that a variety of commodity classifiers can be used if desired.
Techniques implemented in this package come primarily from the following sources:
- David A. Cieslak and Nitesh V. Chawla. “Detecting Fractures in Classifier Performance.” Proceedings of the 7th IEEE International Conference on Data Mining (ICDM), pp. 123–132, 2007.
- David A. Cieslak and Nitesh V. Chawla. “A Framework for Monitoring Classifiers’ Performance: When and Why Failure Occurs?” Knowledge and Information Systems (KAIS), 18(1):83–108, 2009.
- Troy Raeder and Nitesh V. Chawla. “Model Monitor (M2): Evaluating, Comparing, and Monitoring Models.” Journal of Machine Learning Research (JMLR), 10(Jul):1387–1390, 2009.
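A basic building block for shift detection of this kind is a per-feature comparison of a baseline sample against current data, e.g. via the Hellinger distance between binned histograms (one of the measures used in the underlying papers). The sketch below shows that idea; the function name, binning scheme, and interface are illustrative, not Model Monitor's actual API.

```python
from math import sqrt

def feature_shift(baseline, current, bins=10):
    """Score distribution shift for one numeric feature by comparing
    binned histograms of a baseline sample and a current sample with
    the Hellinger distance, normalized to [0, 1]; larger means more
    shift. A sketch, not Model Monitor's actual API."""
    lo = min(min(baseline), min(current))
    hi = max(max(baseline), max(current))
    width = (hi - lo) / bins or 1.0  # guard against a constant feature

    def hist(values):
        counts = [0] * bins
        for v in values:
            counts[min(int((v - lo) / width), bins - 1)] += 1
        total = len(values)
        return [c / total for c in counts]

    p, q = hist(baseline), hist(current)
    return sqrt(sum((sqrt(a) - sqrt(b)) ** 2 for a, b in zip(p, q))) / sqrt(2)
```

Monitoring then reduces to tracking such scores over time and flagging features (or classifiers) whose scores cross a chosen threshold.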
Perl/C SMOTE+Undersampling Wrapper Implementation
Learning from imbalanced data sets presents a challenging problem from both the modeling and cost standpoints. In particular, when a class is of great interest but occurs relatively rarely, such as in cases of fraud, instances of disease, and regions of interest in large-scale simulations, there is a correspondingly high cost for misclassification of rare events. Under such circumstances, it is necessary to generate models with high minority-class accuracy and lower total misclassification cost, which makes it important to apply resampling and/or cost-based reweighting to improve prediction of the minority class. However, the question remains of how to apply the sampling strategy effectively. To that end, we provide a wrapper paradigm that automatically discovers the appropriate amount of resampling for a dataset. This method has produced favorable results compared to other imbalance methods and several cost-sensitive learning methods, such as MetaCost. With it, we have obtained the lowest cost per test example compared to any result we are aware of for the KDD Cup 1999 intrusion detection dataset.
For more details on the wrapper method, please consult the following publication:
- Nitesh V. Chawla, David A. Cieslak, Lawrence O. Hall, and Ajay Joshi. “Automatically Countering Imbalance and Its Empirical Relationship to Cost.” Data Mining and Knowledge Discovery (DMKD), 17(2):225–252, 2008.
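The wrapper idea is to treat the sampling amounts themselves as parameters tuned by held-out evaluation: resample the training data at candidate undersampling and SMOTE levels, train a model, and keep the combination that scores best. A minimal sketch of such a search loop, where `train_eval` is a hypothetical user-supplied callback (the released implementation's search strategy is more refined than this exhaustive grid):

```python
def wrapper_search(train_eval, smote_levels, undersample_levels):
    """Wrapper-style search over resampling amounts (sketch).

    train_eval(smote_pct, under_pct) is assumed to resample the training
    set at the given levels, train a classifier, and return a validation
    score (higher is better). Returns (best_score, smote_pct, under_pct).
    """
    best = None
    for under in undersample_levels:
        for smote in smote_levels:
            score = train_eval(smote, under)
            if best is None or score > best[0]:
                best = (score, smote, under)
    return best
```

Because the objective is an arbitrary validation score, the same loop accommodates cost-based criteria by returning negated misclassification cost.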
Condor Grid Analysis Software Package (GASP)
Whether you are a first-time Condor user or an advanced system administrator, job failure on the grid is inevitable. In a submission batch of 1,000 jobs, one might observe 500 job failures, leaving the user with several questions: Why are some jobs evicted multiple times? Why do some jobs create Shadow Exceptions? Is a group of machines incapable of running a particular submission? All of these questions are difficult to answer due to the scale of the machine pool and the number of jobs submitted. Failure may appear to occur at random, but often there is a pattern, and the Condor Grid Analysis Software Package (GASP) is the tool to help you find it.
- David A. Cieslak, Nitesh V. Chawla, and Douglas L. Thain. “Troubleshooting Thousands of Jobs on Production Grids Using Data Mining Techniques.” Proceedings of the 9th IEEE/ACM International Conference on Grid Computing (GRID), pp. 217–224, 2008.
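As a toy illustration of the kind of pattern such log mining surfaces, one can count failures per (machine, failure-type) pair and report the pairs that recur; the log record format and function name below are hypothetical, and GASP's actual analysis (described in the paper above) is considerably more sophisticated.

```python
from collections import Counter

def failure_hotspots(job_log, min_failures=3):
    """Surface candidate failure patterns by counting failures per
    (machine, failure_type) pair in a job log. A toy sketch with a
    hypothetical log format: each record is a dict with 'machine' and
    'failure' keys, where 'failure' is None for successful jobs."""
    counts = Counter((rec["machine"], rec["failure"])
                     for rec in job_log if rec["failure"] is not None)
    # Report pairs that recur often enough to suggest a systematic cause.
    return [pair for pair, n in counts.most_common() if n >= min_failures]
```

A machine that repeatedly produces the same failure type, for example, is a natural candidate for exclusion from future matchmaking.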