Code for Generating Higher-Order Network (HON) Representations
Many complex systems exhibit intricate dependencies that challenge conventional network analysis. To address this, we have proposed the higher-order network (HON) representation, which discovers variable orders of dependency and embeds them directly in the network, improving accuracy and scalability while remaining fully compatible with the existing suite of network analysis methods. The project website linked below demonstrates how existing network algorithms, including clustering, ranking, and anomaly detection, can be applied to HON without modification, and how the richer representation changes what those algorithms reveal in interdisciplinary applications such as modeling global shipping and web user browsing behavior. A video demo, Python source code, and test data are also available.
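To give a flavor of the idea, the sketch below builds a fixed second-order network from raw sequences. It is illustrative only: the released HON code on GitHub detects variable orders of dependency and splits nodes only where the extra memory is warranted, and the function name and "C|B" node-label convention here are not part of that implementation.

```python
# Illustrative sketch only: builds a fixed second-order network from sequences.
# The released HON code detects *variable* orders adaptively; node labels like
# "C|B" (C conditioned on the previous step B) are a simplification.
from collections import defaultdict

def build_second_order_network(sequences):
    """Turn each observed triple (a, b, c) into an edge between conditional
    nodes, so paths through the network preserve one extra step of memory."""
    edges = defaultdict(int)
    for seq in sequences:
        for a, b, c in zip(seq, seq[1:], seq[2:]):
            # edge from node "b given a" to node "c given b"
            edges[(f"{b}|{a}", f"{c}|{b}")] += 1
    return dict(edges)

# Example: shipping-style trajectories where the next port depends on where a
# vessel came from, something a first-order network cannot capture.
trajectories = [
    ["A", "C", "D"],
    ["B", "C", "E"],
]
for (src, dst), weight in build_second_order_network(trajectories).items():
    print(src, "->", dst, weight)
```

In the example, node C is split into C|A and C|B, so the two trajectories no longer mix at C; that separation is what lets downstream algorithms see the dependency.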
Citation:
- Jian Xu, Thanuka L. Wickramarathne, and Nitesh V. Chawla. “Representing Higher-Order Dependencies in Networks.” Science Advances 2(5):e1600028, 2016.
Links: Project website | GitHub
LPmade: Link Prediction Made Easy
LPmade is a complete cross-platform software solution for multicore link prediction and related tasks and analysis. Its first principal contribution is a scalable network library with high-performance implementations of the most commonly used unsupervised link prediction methods. Because link prediction in longitudinal data requires a disciplined process for correct results and fair evaluation, its second principal contribution is a GNU make script that fully automates link prediction, prediction evaluation, and network analysis. Finally, LPmade streamlines and automates the construction of multivariate supervised link prediction models as proposed by Lichtenwalter et al. (2010), using a version of WEKA (v3.5.8) modified to operate effectively on extremely large data sets. Starting from a raw stream of records representing a network, a few minutes of manual work carries one through hundreds of automated steps to finished plots, gigabytes or terabytes of output, and actionable or publishable results.
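As a rough illustration of the kind of unsupervised predictors involved (a Python sketch, not LPmade's actual C++ interface), the snippet below scores a candidate link with two standard measures, common neighbors and Adamic–Adar:

```python
# Hedged sketch of two standard unsupervised link predictors; the names and
# structure here are illustrative and do not reflect LPmade's C++ API.
import math
from collections import defaultdict

def neighbors(edges):
    """Build an undirected adjacency map from an edge list."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    return adj

def common_neighbors(adj, u, v):
    return len(adj[u] & adj[v])

def adamic_adar(adj, u, v):
    # shared neighbors weighted inversely by the log of their degree
    return sum(1.0 / math.log(len(adj[w]))
               for w in adj[u] & adj[v] if len(adj[w]) > 1)

adj = neighbors([("a", "b"), ("b", "c"), ("a", "c"), ("c", "d")])
print(common_neighbors(adj, "a", "d"))  # 1 (the shared neighbor c)
print(adamic_adar(adj, "a", "d"))       # 1 / log(3)
```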
For more details on the supervised link prediction methods implemented, please consult the following publications:
Citations:
- Ryan N. Lichtenwalter, Jake T. Lussier, and Nitesh V. Chawla. “New Perspectives and Methods in Link Prediction.” Proceedings of the 16th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD), pp. 243–252, 2010.
- Ryan N. Lichtenwalter and Nitesh V. Chawla. “LPmade: Link Prediction Made Easy.” Journal of Machine Learning Research (JMLR), 12(Aug):2489–2492, 2011.
DisNet: A Framework for Distributed Graph Computation
DisNet is a framework for distributed computation on large networks. The C++ implementation is designed for high-efficiency distributed computation. To use DisNet, the user supplies only two small fragments of code describing the fundamental kernel of the computation; the framework automatically divides and distributes the workload and manages its completion across an arbitrary number of heterogeneous computational resources. In practice, we have run DisNet on thousands of machines and observed commensurate speedups.
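The sketch below mimics that division of labor with Python's multiprocessing on a single machine; it is not DisNet's C++ API, only an illustration of supplying a per-vertex kernel and letting a framework partition the work and collect the results.

```python
# Hedged sketch of the general pattern DisNet automates: partition the vertex
# set across workers, run a user-supplied per-vertex kernel on each partition,
# and merge the results. DisNet itself is C++ and distributes across machines;
# this single-machine multiprocessing version only illustrates the pattern.
from multiprocessing import Pool

GRAPH = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}

def kernel(vertex):
    # the user-supplied computation for one vertex; here, simply its degree
    return vertex, len(GRAPH[vertex])

if __name__ == "__main__":
    with Pool(processes=2) as pool:
        results = dict(pool.map(kernel, GRAPH))
    print(results)  # {0: 2, 1: 2, 2: 3, 3: 1}
```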
Citation:
- Ryan N. Lichtenwalter and Nitesh V. Chawla. “DisNet: A Framework for Distributed Graph Computation.” Proceedings of the IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM), pp. 264–270, 2011.