  • Graph Learning Express [2021/10/7]

    Graph-related (graph learning | graph neural networks | graph optimization, etc.) (6 papers)

    [1] Space-Time Graph Neural Networks
    Link: https://arxiv.org/abs/2110.02880

    Authors: Samar Hadou, Charilaos I. Kanatsoulis, Alejandro Ribeiro
    Affiliation: Department of Electrical and Systems Engineering, University of Pennsylvania
    Abstract: We introduce the space-time graph neural network (ST-GNN), a novel GNN architecture tailored to jointly process the underlying space-time topology of time-varying network data. The cornerstone of our proposed architecture is the composition of time and graph convolutional filters followed by pointwise nonlinear activation functions. We introduce a generic definition of convolution operators that mimic the diffusion process of signals over their underlying support. On top of this definition, we propose space-time graph convolutions that are built upon a composition of time and graph shift operators. We prove that ST-GNNs with multivariate integral Lipschitz filters are stable to small perturbations in the underlying graphs as well as small perturbations in the time domain caused by time warping. Our analysis shows that small variations in the network topology and time evolution of a system do not significantly affect the performance of ST-GNNs. Numerical experiments with decentralized control systems showcase the effectiveness and stability of the proposed ST-GNNs.
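
    A minimal numerical sketch of the core operation described above, the composition of time and graph shift operators followed by a pointwise nonlinearity. This is an illustrative toy, not the authors' implementation; the zero-padded time shift and the filter-tap layout H are assumptions made for the example.

```python
import numpy as np

def time_shift(X):
    """Delay a time-varying signal by one step along the time axis (zero-padded at t = 0)."""
    Z = np.zeros_like(X)
    Z[1:] = X[:-1]
    return Z

def space_time_conv(X, S, H):
    """Toy space-time graph convolution: sum_{k,l} H[k,l] * (time shift)^k (graph shift)^l X.

    X : (T, N) time-varying graph signal
    S : (N, N) graph shift operator (e.g., a normalized adjacency matrix)
    H : (K, L) filter taps (an illustrative parameterization)
    """
    K, L = H.shape
    Y = np.zeros(X.shape, dtype=float)
    Xt = X.astype(float)
    for k in range(K):
        Xg = Xt.copy()
        for l in range(L):
            Y += H[k, l] * Xg
            Xg = Xg @ S.T        # apply the graph shift once more
        Xt = time_shift(Xt)      # apply the time shift once more
    return Y

# One ST-GNN layer would follow the filter with a pointwise nonlinearity, e.g.:
# Z = np.tanh(space_time_conv(X, S, H))
```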

     

    [2] Semi-relaxed Gromov Wasserstein divergence with applications on graphs
    Link: https://arxiv.org/abs/2110.02753

    Authors: Cédric Vincent-Cuaz, Rémi Flamary, Marco Corneli, Titouan Vayer, Nicolas Courty
    Affiliations: Université Côte d'Azur, Inria, Maasai, CNRS, LJAD, MSI, Nice, France; Université Lyon, Inria, CNRS, ENS de Lyon, UCB Lyon, LIP UMR, Lyon, France; Université Bretagne-Sud, CNRS, IRISA, Vannes, France
    Note: preprint under review
    Abstract: Comparing structured objects such as graphs is a fundamental operation involved in many learning tasks. To this end, the Gromov-Wasserstein (GW) distance, based on Optimal Transport (OT), has proven successful in handling the specific nature of the associated objects. More specifically, through the nodes' connectivity relations, GW operates on graphs, seen as probability measures over specific spaces. At the core of OT is the idea of conservation of mass, which imposes a coupling between all the nodes of the two considered graphs. We argue in this paper that this property can be detrimental for tasks such as graph dictionary or partition learning, and we relax it by proposing a new semi-relaxed Gromov-Wasserstein divergence. Aside from immediate computational benefits, we discuss its properties and show that it can lead to an efficient graph dictionary learning algorithm. We empirically demonstrate its relevance for complex tasks on graphs such as partitioning, clustering and completion.
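
    To make the relaxation concrete, here is a small numpy sketch of the (square-loss) GW objective, with a comment contrasting the feasible sets of classical GW and the semi-relaxed variant. It is illustrative only and is not the paper's solver.

```python
import numpy as np

def gw_cost(C1, C2, T):
    """Square-loss Gromov-Wasserstein cost of a coupling T between two graphs
    described by pairwise structure matrices C1 (n x n) and C2 (m x m)."""
    p, q = T.sum(axis=1), T.sum(axis=0)          # marginals induced by T
    a = (C1 ** 2) @ p                            # shape (n,)
    b = (C2 ** 2) @ q                            # shape (m,)
    # equals sum_{i,j,k,l} (C1[i,k] - C2[j,l])^2 * T[i,j] * T[k,l]
    return float(np.sum(T * (a[:, None] + b[None, :] - 2.0 * C1 @ T @ C2.T)))

# Classical GW minimizes gw_cost over couplings with BOTH marginals fixed:
#     T >= 0,  T @ 1 = p,  T.T @ 1 = q.
# The semi-relaxed divergence keeps only the first constraint (T @ 1 = p) and
# lets the second marginal adapt, allowing many-to-one, cluster-like alignments.
```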

     

    [3] An Analysis of Attentive Walk-Aggregating Graph Neural Networks
    Link: https://arxiv.org/abs/2110.02667

    Authors: Mehmet F. Demirel, Shengchao Liu, Siddhant Garg, Yingyu Liang
    Affiliations: University of Wisconsin–Madison, Quebec AI Institute (Mila), Amazon Alexa AI
    Note: Preprint (36 pages)

    Abstract: Graph neural networks (GNNs) have been shown to possess strong representation power, which can be exploited for downstream prediction tasks on graph-structured data such as molecules and social networks. They typically learn representations by aggregating information from the K-hop neighborhood of individual vertices or from the enumerated walks in the graph. Prior studies have demonstrated the effectiveness of incorporating weighting schemes into GNNs; however, this has so far been limited primarily to K-hop neighborhood GNNs. In this paper, we aim to extensively analyze the effect of incorporating weighting schemes into walk-aggregating GNNs. Towards this objective, we propose a novel GNN model, called AWARE, that aggregates information about the walks in the graph using attention schemes in a principled way to obtain an end-to-end supervised learning method for graph-level prediction tasks. We perform theoretical, empirical, and interpretability analyses of AWARE. Our theoretical analysis provides the first provable guarantees for weighted GNNs, demonstrating how the graph information is encoded in the representation and how the weighting schemes in AWARE affect the representation and learning performance. We empirically demonstrate the superiority of AWARE over prior baselines in the domains of molecular property prediction (61 tasks) and social networks (4 tasks). Our interpretation study illustrates that AWARE can successfully learn to capture the important substructures of the input graph.
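
    The attention-over-walks idea can be conveyed with a toy sketch using explicit walk enumeration over a tiny graph; the actual AWARE model aggregates walk information with attention schemes rather than by explicit enumeration, so the functions below are only a rough illustration.

```python
import numpy as np

def enumerate_walks(adj, length):
    """All walks with `length` edges in a small graph (explicit enumeration,
    feasible only for tiny graphs and used here purely for illustration)."""
    n = adj.shape[0]
    walks = [(v,) for v in range(n)]
    for _ in range(length):
        walks = [w + (u,) for w in walks for u in range(n) if adj[w[-1], u]]
    return walks

def attentive_walk_readout(adj, X, w_att, length=2):
    """Toy attention-weighted walk aggregation (not the paper's AWARE model).

    adj   : (n, n) 0/1 adjacency matrix
    X     : (n, d) node features
    w_att : (d,)   scoring vector acting as a stand-in attention mechanism
    """
    walks = enumerate_walks(adj, length)
    embs = np.array([X[list(w)].mean(axis=0) for w in walks])   # one vector per walk
    scores = embs @ w_att
    att = np.exp(scores - scores.max())
    att /= att.sum()                                            # softmax over walks
    return att @ embs                                           # graph-level representation
```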

    [4] Inference Attacks Against Graph Neural Networks
    Link: https://arxiv.org/abs/2110.02631

    Authors: Zhikun Zhang, Min Chen, Michael Backes, Yun Shen, Yang Zhang
    Affiliations: CISPA Helmholtz Center for Information Security; Norton Research Group
    Note: 19 pages, 18 figures. To appear in the 31st USENIX Security Symposium

    Abstract: Graphs are an important data representation ubiquitous in the real world. However, analyzing graph data is computationally difficult due to its non-Euclidean nature. Graph embedding is a powerful tool to solve graph analytics problems by transforming graph data into low-dimensional vectors. These vectors could also be shared with third parties to gain additional insights into what is behind the data. While sharing graph embeddings is intriguing, the associated privacy risks are unexplored. In this paper, we systematically investigate the information leakage of graph embeddings by mounting three inference attacks. First, we can successfully infer basic graph properties, such as the number of nodes, the number of edges, and graph density, of the target graph with up to 0.89 accuracy. Second, given a subgraph of interest and the graph embedding, we can determine with high confidence whether the subgraph is contained in the target graph. For instance, we achieve 0.98 attack AUC on the DD dataset. Third, we propose a novel graph reconstruction attack that can reconstruct a graph whose structural statistics are similar to those of the target graph. We further propose an effective defense mechanism based on graph embedding perturbation to mitigate the inference attacks without noticeable performance degradation for graph classification tasks. Our code is available at https://github.com/Zhangzhk0819/GNN-Embedding-Leaks.
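
    A toy sketch of the idea behind the first attack, inferring graph properties from embeddings by training on shadow graphs. The names and setup are hypothetical; this is not the paper's attack pipeline.

```python
from sklearn.ensemble import RandomForestClassifier

def property_inference_attack(shadow_embs, shadow_props, target_embs):
    """Learn a mapping from graph embeddings to bucketized graph properties
    (e.g., node-count ranges) on attacker-controlled shadow graphs, then apply
    it to the target embeddings. Purely illustrative.

    shadow_embs  : (N, d) embeddings of shadow graphs
    shadow_props : (N,)   property labels of the shadow graphs
    target_embs  : (M, d) embeddings shared by the victim
    """
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(shadow_embs, shadow_props)
    return clf.predict(target_embs)
```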

     

    [5] A Regularized Wasserstein Framework for Graph Kernels
    Link: https://arxiv.org/abs/2110.02554

    Authors: Asiri Wijesinghe, Qing Wang, Stephen Gould
    Affiliation: School of Computing, Australian National University, Canberra, Australia
    Note: 21st IEEE International Conference on Data Mining (ICDM 2021)

    Abstract: We propose a learning framework for graph kernels that is theoretically grounded in regularized optimal transport. This framework provides a novel optimal transport distance metric, namely the Regularized Wasserstein (RW) discrepancy, which can preserve both the features and the structure of graphs via Wasserstein distances on features and their local variations, local barycenters, and global connectivity. Two strongly convex regularization terms are introduced to improve the learning ability. One relaxes the optimal alignment between graphs to a cluster-to-cluster mapping between their locally connected vertices, thereby preserving the local clustering structure of graphs. The other takes node degree distributions into account in order to better preserve the global structure of graphs. We also design an efficient algorithm that enables a fast approximation for solving the optimization problem. Theoretically, our framework is robust and can guarantee convergence and numerical stability in optimization. We have empirically validated our method on 12 datasets against 16 state-of-the-art baselines. The experimental results show that our method consistently outperforms all state-of-the-art methods on all benchmark datasets, for both graphs with discrete attributes and graphs with continuous attributes.
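
    As background for the regularized-OT building block, here is a generic entropic-regularization (Sinkhorn) sketch of an OT discrepancy between two graphs' node features. Note that the paper's RW discrepancy uses two different, structure-aware strongly convex regularizers (the cluster-to-cluster relaxation and the node-degree term described above), so this is only a stand-in.

```python
import numpy as np

def sinkhorn_cost(M, a, b, reg=0.1, n_iter=200):
    """Entropy-regularized optimal-transport cost between histograms a and b
    under cost matrix M (plain Sinkhorn iterations)."""
    K = np.exp(-M / reg)
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)
        u = a / (K @ v)
    T = u[:, None] * K * v[None, :]              # transport plan
    return float(np.sum(T * M))

def graph_feature_discrepancy(X1, X2, reg=0.1):
    """OT discrepancy between the node-feature clouds of two graphs."""
    M = np.linalg.norm(X1[:, None, :] - X2[None, :, :], axis=-1)   # pairwise feature costs
    a = np.full(len(X1), 1.0 / len(X1))
    b = np.full(len(X2), 1.0 / len(X2))
    return sinkhorn_cost(M, a, b, reg)

# Pairwise discrepancies D can then be turned into a graph kernel,
# e.g. K[i, j] = np.exp(-gamma * D[i, j]).
```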

    [6] A Topological View of Rule Learning in Knowledge Graphs
    Link: https://arxiv.org/abs/2110.02510

    Authors: Zuoyu Yan, Tengfei Ma, Liangcai Gao, Zhi Tang, Chao Chen
    Affiliations: Wangxuan Institute of Computer Technology, Peking University; IBM Thomas J. Watson Research Center; Department of Biomedical Informatics, Stony Brook University

    Abstract: Inductive relation prediction is an important learning task for knowledge graph completion. One can use the existence of rules, namely sequences of relations, to predict the relation between two entities. Previous works view rules as paths and primarily focus on searching for paths between entities. The space of paths is huge, and one has to sacrifice either efficiency or accuracy. In this paper, we consider rules in knowledge graphs as cycles and show that the space of cycles has a unique structure based on the theory of algebraic topology. By exploring the linear structure of the cycle space, we can improve the efficiency of rule searching. We propose to collect cycle bases that span the space of cycles. We build a novel GNN framework on the collected cycles to learn the representations of cycles and to predict the existence/non-existence of a relation. Our method achieves state-of-the-art performance on benchmarks.
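
    A small illustration of the cycle-space idea (not the paper's pipeline): a cycle basis of a toy graph can be collected with networkx, and the paper then learns a GNN over such collected cycles. The entities and relations below are made up for the example.

```python
import networkx as nx

# Take the undirected graph underlying a tiny knowledge graph and collect a
# cycle basis, so that every cycle (and hence every closed rule pattern) is a
# combination of basis cycles.
G = nx.Graph()
G.add_edges_from([
    ("alice", "acme"),      # works_at
    ("acme", "london"),     # located_in
    ("alice", "london"),    # lives_in: closes a cycle, i.e. a candidate rule
    ("bob", "acme"),        # works_at
    ("bob", "london"),      # lives_in: another cycle through shared edges
])

basis = nx.cycle_basis(G)   # cycles spanning the cycle space of G
print(len(basis), "basis cycles:", basis)
```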

    Work on the causes, and the results will follow. Author: 每天卷学习. Please credit the original post when reposting: https://www.cnblogs.com/BlairGrowing/p/15375304.html
