  • Paper Reading Notes | Detect Rumors on Twitter by Promoting Information Campaigns with Generative Adversarial Learning

    1. Abstract

    In this paper, the authors attempt to fight such chaos (fake news) with itself, to make automatic rumor detection more robust and effective.

    The idea is inspired by the adversarial learning method that originated from Generative Adversarial Networks (GANs).

    In this approach, a generator is designed to produce uncertain or conflicting voices, complicating the original conversational threads in order to pressure the discriminator into learning stronger rumor-indicative representations from the augmented, more challenging examples.

    Different from traditional data-driven approaches to rumor detection, the method can capture low-frequency but stronger non-trivial patterns via such adversarial training.

    2. Existing Methods

    RNN- and CNN-based models.

    Nevertheless, existing data-driven approaches typically rely on finding indicative responses such as skeptical and disagreeing opinions for detection.

    Our seemingly counter-intuitive idea is inspired by Generative Adversarial Networks, dubbed GAN [6, 7], where a discriminative classifier learns to distinguish whether an instance is from the real world, and a generative model is trained to confuse the discriminator by generating approximately realistic examples.

    Why can such a GAN-style method do better in feature learning?

    The main contributions of our paper are four-fold:

    1. We propose the first generative approach for rumor detection, using a text-based GAN-style framework.
    2. We model rumor dissemination as generative information campaigns that produce confusing training examples to challenge the detection capacity of the discriminator.
    3. We reinforce the discriminator so that it learns stronger rumor-indicative representations from the augmented examples.
    4. Our model is more robust and effective than state-of-the-art baselines.

    3. Problem Statement

    A claim is commonly represented by a set of posts (i.e., tweets) relevant to the claim, which can be collected via Twitter's search function.
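    As a rough illustration (the class name and fields below are my own, not the paper's), each instance can be thought of as a label plus a time-ordered list of posts:

```python
# Hypothetical sketch of one detection instance: a claim label plus its related posts.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ClaimInstance:
    claim_id: str
    label: str                                        # e.g. "rumor" or "non-rumor"
    posts: List[str] = field(default_factory=list)    # tweets ordered by timestamp

instance = ClaimInstance(
    claim_id="c42",
    label="rumor",
    posts=["Breaking: X happened!", "Is this real?", "Looks fake to me."],
)
```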

    GENERATIVE ADVERSARIAL LEARNING FOR RUMOR DETECTION

    3.1 Controversial Example Generation

    A straightforward way is to twist or complicate the opinions expressed in the original data examples via a handful of rule templates.

    We design two generators: one for distorting a non-rumor to make it look like a rumor, and the other for 'whitewashing' a rumor so that it looks like a non-rumor.

    Considering the time-sequence structure of posts in each instance, we use a sequence-to-sequence model for the generative transformation, which is illustrated in the figure.

    We use a GRU to compute the hidden representations:
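    The notes omit the formula after the colon; the standard GRU update (one common convention, bias terms omitted, not copied verbatim from the paper) is:

$$
\begin{aligned}
z_t &= \sigma(W_z x_t + U_z h_{t-1}), \\
r_t &= \sigma(W_r x_t + U_r h_{t-1}), \\
\tilde{h}_t &= \tanh\big(W_h x_t + U_h (r_t \odot h_{t-1})\big), \\
h_t &= (1 - z_t) \odot h_{t-1} + z_t \odot \tilde{h}_t .
\end{aligned}
$$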

    The output h_T of the last time step from the GRU-RNN encoder serves as the hidden representation of X_y. Note that the sequence length T is not fixed and can vary across instances.
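    A minimal PyTorch-style sketch of such an encoder (layer sizes and names are assumptions, not values from the paper):

```python
import torch
import torch.nn as nn

class GRUEncoder(nn.Module):
    """Encode a variable-length sequence of post embeddings into a single vector h_T."""
    def __init__(self, input_dim: int = 100, hidden_dim: int = 100):
        super().__init__()
        self.gru = nn.GRU(input_dim, hidden_dim, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, T, input_dim); T may differ from instance to instance
        _, h_last = self.gru(x)       # h_last: (1, batch, hidden_dim), last-step state
        return h_last.squeeze(0)      # hidden representation of X_y

# Example: encode a claim with 7 posts, each a 100-dim embedding.
h = GRUEncoder()(torch.randn(1, 7, 100))
```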

    3.2 GAN-Style Adversarial Learning Model

    We use the performance of the discriminator as a reward to guide the generators. The framework consists of an adversarial learning module and two reconstruction modules (one for rumor and the other for non-rumor).

    Adversarial learning module:

    We formulate the adversarial loss as the negative of the discriminator loss on the generator-augmented training data.
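    The notes drop the formula itself; a form consistent with this description (my reconstruction, not a quote from the paper) is

$$
L_G(\tilde{X}_y) \;=\; -\,L_D\big(\bar{y}, \hat{y}\big), \qquad \hat{y} = D(\tilde{X}_y),
$$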

    where L_D(·) is the loss between the ground-truth class probability distribution ȳ and the class distribution ŷ predicted by the discriminator given an input instance.

    We combine the generated examples with the original ones to augment the training set. We do not want to seriously weaken the useful features in the original example, so the original example X_y is combined with X̃_y for training, as shown in Figure 3.

    Reconstruction module:

    We introduce a reconstruction mechanism to make the generative process reversible. The idea is that the opinionated voices should be recoverable through the two generators of opposite direction, so as to minimize the loss of information fidelity.
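    A minimal sketch of this idea in the style of a cycle-consistency penalty (the function names and the choice of a squared-error distance are assumptions):

```python
import torch

def reconstruction_loss(x_rumor: torch.Tensor, g_whitewash, g_distort) -> torch.Tensor:
    """Cycle a rumor-side representation through both generators and penalize the
    distance to the original; a symmetric term would handle non-rumor instances."""
    x_back = g_distort(g_whitewash(x_rumor))      # rumor -> non-rumor style -> rumor again
    return torch.mean((x_rumor - x_back) ** 2)    # fidelity penalty (assumed L2)
```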

    Objective of optimization:

    The objective of adversarial learning takes a min-max form.
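    The notes do not reproduce the formula; schematically (my reconstruction), the discriminator minimizes its classification loss on the augmented data while the two generators maximize it, with the reconstruction losses added on the generator side:

$$
\min_{D}\; \max_{G_r,\,G_n}\; L_D\big(\bar{y}, \hat{y}\big).
$$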

    3.3 Rumor Discriminator

    With the combined data set, the discriminator learns to capture more discriminative features, especially from low-frequency non-trivial patterns.

    We build the discriminator based on an RNN rumor detection model.

    The generators and the discriminator are alternately trained using stochastic gradient descent with mini-batches.
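    A schematic of this alternating update in pseudo-PyTorch; `generators.transform`, `generators.reconstruction_loss`, and `discriminator.loss` are assumed interfaces, not names from the paper:

```python
import torch

def alternating_train(generators, discriminator, loader, g_opt, d_opt, epochs=10):
    """Alternate generator and discriminator updates over mini-batches."""
    for _ in range(epochs):
        for x, y in loader:                               # x: post sequences, y: labels
            # Generator step: push the discriminator loss up while keeping
            # the transformed instances reconstructible.
            x_gen = generators.transform(x, y)            # distort non-rumors / whitewash rumors
            d_loss = discriminator.loss(torch.cat([x, x_gen], dim=1), y)
            g_loss = -d_loss + generators.reconstruction_loss(x, y)
            g_opt.zero_grad(); g_loss.backward(); g_opt.step()

            # Discriminator step: fit the harder, augmented examples.
            x_gen = generators.transform(x, y).detach()
            d_loss = discriminator.loss(torch.cat([x, x_gen], dim=1), y)
            d_opt.zero_grad(); d_loss.backward(); d_opt.step()
```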

    4. Experiments and Results

    Early Rumor Detection

    5. Conclusion and Future Work

    Our neural-network-based generators create training examples to confuse the rumor discriminator, so that the discriminator is forced to learn more powerful features from the augmented training data.
