**Invited Speakers**

Veronica Vinciotti, Brunel University London -

*Identifying overlapping terrorist cells from the Noordin Top actor-event network*

Actor-event data are common in sociological settings, where one records the attendance of a group of social actors at a number of events. We focus on 79 members of the Noordin Top terrorist network, who were monitored attending 45 events. The terrorists' attendance or non-attendance at events reflects the social fabric of the network, such as group coherence and social communities, and the aim of analysing such data is to learn about this social structure. Actor-event data are often transformed to actor-actor data in order to be analysed further by network models, such as stochastic block models. This transformation, and such analyses, lead to a loss of information, particularly when one is interested in identifying possibly overlapping subgroups or communities of actors on the basis of their attendance at events. In this talk we propose an actor-event model for overlapping communities of terrorists, which simplifies interpretation of the network. We propose a mixture model with overlapping clusters for the analysis of the binary actor-event network data and develop a Bayesian procedure for inference. After a simulation study, we show that this analysis of the terrorist network has clear interpretative advantages over the more traditional approaches of network analysis.
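The talk's model allows overlapping clusters and full Bayesian inference; as a rough illustration of the underlying idea only, the sketch below fits a plain (non-overlapping) Bernoulli mixture to a binary actor-event matrix with EM. All function names and the toy data are hypothetical, not from the paper.

```python
import numpy as np

def em_bernoulli_mixture(X, k, n_iter=100, seed=0):
    """EM for a Bernoulli mixture over binary actor-event rows.

    X: (n_actors, n_events) binary matrix; k: number of clusters.
    A simplified, non-overlapping stand-in for the overlapping
    mixture model described in the talk.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    pi = np.full(k, 1.0 / k)                  # cluster weights
    theta = rng.uniform(0.25, 0.75, (k, d))   # attendance probabilities
    for _ in range(n_iter):
        # E-step: responsibilities from each row's log-likelihood per cluster
        log_lik = X @ np.log(theta).T + (1 - X) @ np.log(1 - theta).T
        log_r = np.log(pi) + log_lik
        log_r -= log_r.max(axis=1, keepdims=True)
        r = np.exp(log_r)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: update weights and per-event attendance probabilities
        pi = r.mean(axis=0)
        theta = np.clip((r.T @ X) / r.sum(axis=0)[:, None], 1e-6, 1 - 1e-6)
    return pi, theta, r

# Toy data: two groups of actors attending disjoint event sets
X = np.zeros((10, 6))
X[:5, :3] = 1.0
X[5:, 3:] = 1.0
pi, theta, r = em_bernoulli_mixture(X, k=2)
```

The responsibilities `r` give each actor a soft cluster membership; the overlapping model in the talk goes further by letting an actor belong outright to several communities.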

Leman Akoglu, Carnegie Mellon University -

*A Quest for Structure: Joint Graph Structure & Semi-Supervised Inference*

Semi-supervised learning (SSL) is used effectively for numerous classification problems thanks to its ability to make use of abundant unlabeled data. The main assumption of various SSL algorithms is that nearby points on the data manifold are likely to share a label. Graph-based SSL constructs a graph from point-cloud data as an approximation to the underlying manifold, followed by label inference. It is no surprise that the quality of the constructed graph in capturing the essential structure of the data is critical to the accuracy of the subsequent inference step.

How should one construct a graph from the input point-cloud data for graph-based SSL? In this talk I will introduce a Parallel Graph Learning framework (called PG-Learn) for the graph construction step of SSL. The solution has two main ingredients: (1) a gradient-based optimization of the edge weights (specifically, a different kernel bandwidth in each dimension) based on a validation loss objective, and (2) a parallel hyper-parameter search algorithm with an adaptive resource allocation scheme. The latter early-terminates searches based on relative performance, in order to dynamically allocate resources (time and processors) to promising configurations. Put together, PG-Learn is a hybrid of iterative local search and parallel random search that strategically navigates the hyper-parameter space. What is more, PG-Learn is scalable in both dimensionality and number of samples, in terms of both runtime and memory requirements.
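The adaptive resource allocation idea, early-terminating configurations that compare poorly, can be sketched with a successive-halving loop. This is a minimal stand-in, not PG-Learn itself: the `evaluate` callback and the toy bandwidth score are hypothetical, and the actual system additionally interleaves gradient-based weight updates across parallel workers.

```python
import math

def successive_halving(configs, evaluate, budget=1, eta=2):
    """Successive-halving sketch of adaptive resource allocation.

    Evaluates all surviving configurations with `budget` units of
    resource, keeps only the top 1/eta by validation score, and gives
    the survivors eta times more budget in the next round. `evaluate`
    is a hypothetical callback (config, budget) -> validation score.
    """
    rung = list(configs)
    while len(rung) > 1:
        scored = sorted(rung, key=lambda c: evaluate(c, budget), reverse=True)
        rung = scored[: max(1, len(rung) // eta)]  # early-terminate the rest
        budget *= eta                              # survivors get more resources
    return rung[0]

# Toy example: each "config" is a kernel bandwidth; the score peaks at 1.0
def toy_eval(bandwidth, budget):
    return -abs(math.log(bandwidth))

best = successive_halving([0.1, 0.5, 1.0, 2.0, 8.0], toy_eval)
# best → 1.0
```

The early-termination trade-off is the same one the talk describes: configurations that look weak relative to their peers lose their workers, which are reassigned to the promising ones.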

Martin Rosvall, Umeå University -

*Inference beyond network models*

Flows of ideas, information, money, people, or goods characteristically persist in modules that can reveal system functions. Conventional approaches to identifying such modules in network flows ignore that flow direction often depends on more than a single step, that is, on where the flows come from. We derive modules using the relevant information in paths by combining data and network clustering with cross-validation for model selection. We efficiently describe paths with sparse Markov chains and reveal modular organization with maps based on optimal compression. For example, we identify nested, overlapping modules of citation flows between 4.5 million articles in all areas of science that, compared with existing classifications, almost double the time flows persist in modules, and thereby more accurately reveal the modular organization of science. We visualize the organization with a new network navigator.
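One way to see why "where the flows come from" matters is a second-order Markov chain whose states are (previous node, current node) pairs. The sketch below estimates such transitions from observed paths; it is a minimal illustration of memory in network flows, not the talk's sparse Markov chain model or its compression-based map machinery.

```python
from collections import Counter, defaultdict

def second_order_transitions(paths):
    """Estimate second-order Markov transition probabilities from paths.

    A state is the pair (previous node, current node), so the next step
    can depend on where the flow came from -- information that a
    first-order network model discards.
    """
    counts = defaultdict(Counter)
    for path in paths:
        for a, b, c in zip(path, path[1:], path[2:]):
            counts[(a, b)][c] += 1
    return {
        state: {nxt: n / sum(ctr.values()) for nxt, n in ctr.items()}
        for state, ctr in counts.items()
    }

# Toy paths: flows through "hub" return toward where they came from
paths = [["a1", "hub", "a2"], ["a2", "hub", "a1"],
         ["b1", "hub", "b2"], ["b2", "hub", "b1"]]
P = second_order_transitions(paths)
```

Here `P[("a1", "hub")]` is `{"a2": 1.0}`: the step out of the hub is fully determined by the step into it, whereas a first-order model at the hub would mix the a- and b-flows together and blur the two modules.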

Dean Eckles, MIT -

*Randomization inference for spillovers in networks*

Social, behavioral, and network scientists are interested in testing hypotheses about spillovers (i.e., interference, exogenous peer effects) in social networks, and similar questions arise in other settings (e.g., biological and computer networks). However, when there is a single network, this is complicated by the lack of independent observations. This talk explores Fisherian randomization inference as an approach to exact finite-population inference. One problem that arises is that the relevant hypotheses (e.g., no spillovers) are non-sharp null hypotheses. Fisherian randomization inference can be used to test these hypotheses either by (a) making the hypotheses sharp by assuming a model for direct effects or (b) conducting conditional randomization inference such that the hypotheses become sharp. I present both of these approaches, the latter of which is developed in Aronow (2012) and our paper (Athey, Eckles & Imbens, 2017). This usually involves selecting some vertices to be "focal" and conditioning on their treatment assignment and/or the assignment of some or all of their network neighbors. The selection of this set can present interesting algorithmic questions; we, for example, make use of greedy methods for finding maximal independent sets. I illustrate these methods with an application to a large voter turnout experiment on Facebook (Jones et al., 2017).
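The conditional approach can be illustrated with a toy permutation test that fixes the focal vertices' own assignments and re-randomizes everyone else's. Both the spillover statistic and the conditioning scheme below are simplified, hypothetical stand-ins for those in Athey, Eckles & Imbens (2017), which are more careful about which assignments may be permuted.

```python
import random

def randomization_pvalue(adj, treat, outcome, focal, stat, n_perm=200, seed=0):
    """Toy Fisherian randomization test of 'no spillovers' on focal vertices.

    Holds the focal vertices' own treatments fixed, re-randomizes the
    non-focal treatments, and recomputes a spillover statistic each time.
    """
    rng = random.Random(seed)
    nonfocal = [i for i in range(len(treat)) if i not in focal]
    observed = stat(adj, treat, outcome, focal)
    count = 0
    for _ in range(n_perm):
        perm = list(treat)
        shuffled = [treat[i] for i in nonfocal]
        rng.shuffle(shuffled)
        for i, t in zip(nonfocal, shuffled):
            perm[i] = t
        if abs(stat(adj, perm, outcome, focal)) >= abs(observed):
            count += 1
    return (count + 1) / (n_perm + 1)

def treated_neighbor_stat(adj, treat, outcome, focal):
    # Mean outcome gap between focal units with and without a treated
    # neighbor (a hypothetical spillover statistic, not the paper's).
    exposed = [outcome[i] for i in focal if any(treat[j] for j in adj[i])]
    unexposed = [outcome[i] for i in focal if not any(treat[j] for j in adj[i])]
    if not exposed or not unexposed:
        return 0.0
    return sum(exposed) / len(exposed) - sum(unexposed) / len(unexposed)

# Tiny 4-vertex example: vertices 0 and 1 are focal
p = randomization_pvalue({0: [2], 1: [3], 2: [0], 3: [1]},
                         [0, 0, 1, 0], [1.0, 0.0, 0.0, 0.0], {0, 1},
                         treated_neighbor_stat)
```

On a graph this small the test has no power (every re-randomization reproduces the observed statistic up to sign, so `p` is 1.0); the point is only the mechanics of conditioning on the focal set.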