  • A detailed walkthrough of the epoll kernel source code (repost; author: 赛罗·奥特曼, source: 牛客网)

           After an interview-experience post of mine got some attention, many readers asked me for the kernel implementation of epoll, so I'm posting the source here on 牛客网 together with the flow I summarized from it.

    Also, many blog posts out there claim that epoll uses shared memory. That is completely wrong: read the source and you will find no shared-memory API at all;
    instead, the kernel exchanges data with user-space virtual memory using copy_from_user() and __put_user().
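    To make those copy semantics concrete, here is a minimal user-space sketch (not part of the kernel source; the connected socket sockfd and the buffer size are assumed purely for illustration). The struct epoll_event handed to epoll_ctl() is copied into the kernel with copy_from_user(), and the array handed to epoll_wait() is written back by the kernel with __put_user():

        #include <sys/epoll.h>

        int watch_socket(int sockfd)
        {
            struct epoll_event ev, events[16];
            int n, epfd = epoll_create1(0);

            if (epfd < 0)
                return -1;
            ev.events  = EPOLLIN;   /* copied INTO the kernel by epoll_ctl() */
            ev.data.fd = sockfd;
            if (epoll_ctl(epfd, EPOLL_CTL_ADD, sockfd, &ev) < 0)
                return -1;
            /* the kernel writes ready events back into our own buffer */
            n = epoll_wait(epfd, events, 16, 1000 /* ms */);
            return n;
        }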

     
      1  *  fs/eventpoll.c (Efficient event retrieval implementation)
      2  *  Copyright (C) 2001,...,2009  Davide Libenzi
      3  *
      4  *  This program is free software; you can redistribute it and/or modify
      5  *  it under the terms of the GNU General Public License as published by
      6  *  the Free Software Foundation; either version 2 of the License, or
      7  *  (at your option) any later version.
      8  *
      9  *  Davide Libenzi <davidel@xmailserver.org>
     10  *
     11  */
     12 /*
     13  * 在深入了解epoll的实现之前, 先来了解内核的3个方面.
     14  * 1. 等待队列 waitqueue
     15  * 我们简单解释一下等待队列:
     16  * 队列头(wait_queue_head_t)往往是资源生产者,
     17  * 队列成员(wait_queue_t)往往是资源消费者,
     18  * 当头的资源ready后, 会逐个执行每个成员指定的回调函数,
     19  * 来通知它们资源已经ready了, 等待队列大致就这个意思.
     20  * 2. 内核的poll机制
     21  * 被Poll的fd, 必须在实现上支持内核的Poll技术,
     22  * 比如fd是某个字符设备,或者是个socket, 它必须实现
     23  * file_operations中的poll操作, 给自己分配有一个等待队列头.
     24  * 主动poll fd的某个进程必须分配一个等待队列成员, 添加到
      25  * fd的等待队列里面去, 并指定资源ready时的回调函数.
     26  * 用socket做例子, 它必须有实现一个poll操作, 这个Poll是
     27  * 发起轮询的代码必须主动调用的, 该函数中必须调用poll_wait(),
     28  * poll_wait会将发起者作为等待队列成员加入到socket的等待队列中去.
     29  * 这样socket发生状态变化时可以通过队列头逐个通知所有关心它的进程.
     30  * 这一点必须很清楚的理解, 否则会想不明白epoll是如何
     31  * 得知fd的状态发生变化的.
     32  * 3. epollfd本身也是个fd, 所以它本身也可以被epoll,
     33  * 可以猜测一下它是不是可以无限嵌套epoll下去...
     34  *
     35  * epoll基本上就是使用了上面的1,2点来完成.
     36  * 可见epoll本身并没有给内核引入什么特别复杂或者高深的技术,
     37  * 只不过是已有功能的重新组合, 达到了超过select的效果.
     38  */
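/*
 * [Editor's illustrative sketch -- NOT part of eventpoll.c]
 * A typical f_op->poll implementation, to make point 2 above concrete
 * (demo_dev, read_wq and data_available() are made-up names): it first
 * calls poll_wait() to hook the caller onto the device's wait queue head,
 * and then reports whatever events are ready right now.
 */
static unsigned int demo_poll(struct file *file, poll_table *wait)
{
    struct demo_dev *dev = file->private_data;    /* hypothetical device struct */
    unsigned int mask = 0;

    /* hook the poller (e.g. epoll's ep_ptable_queue_proc) onto our wait queue */
    poll_wait(file, &dev->read_wq, wait);
    if (data_available(dev))                      /* hypothetical helper */
        mask |= POLLIN | POLLRDNORM;
    return mask;
}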
     39 /*
     40  * 相关的其它内核知识:
     41  * 1. fd我们知道是文件描述符, 在内核态, 与之对应的是struct file结构,
     42  * 可以看作是内核态的文件描述符.
     43  * 2. spinlock, 自旋锁, 必须要非常小心使用的锁,
     44  * 尤其是调用spin_lock_irqsave()的时候, 中断关闭, 不会发生进程调度,
     45  * 被保护的资源其它CPU也无法访问. 这个锁是很强力的, 所以只能锁一些
     46  * 非常轻量级的操作.
     47  * 3. 引用计数在内核中是非常重要的概念,
     48  * 内核代码里面经常有些release, free释放资源的函数几乎不加任何锁,
     49  * 这是因为这些函数往往是在对象的引用计数变成0时被调用,
     50  * 既然没有进程在使用在这些对象, 自然也不需要加锁.
     51  * struct file 是持有引用计数的.
     52  */
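/*
 * [Editor's illustrative sketch -- not from eventpoll.c]
 * The spin_lock_irqsave() pattern from point 2 above, in the shape it is
 * used throughout this file: interrupts are disabled on this CPU, the
 * critical section stays tiny (here just a list operation), then the
 * previous IRQ state is restored.
 */
static void demo_mark_ready(struct eventpoll *ep, struct epitem *epi)
{
    unsigned long flags;

    spin_lock_irqsave(&ep->lock, flags);          /* IRQs off, lock taken */
    if (!ep_is_linked(&epi->rdllink))
        list_add_tail(&epi->rdllink, &ep->rdllist);
    spin_unlock_irqrestore(&ep->lock, flags);     /* restore IRQ state */
}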
     53 /* --- epoll相关的数据结构 --- */
     54 /*
     55  * This structure is stored inside the "private_data" member of the file
     56  * structure and rapresent the main data sructure for the eventpoll
     57  * interface.
     58  */
     59 /* 每创建一个epollfd, 内核就会分配一个eventpoll与之对应, 可以说是
     60  * 内核态的epollfd. */
     61 struct eventpoll {
     62     /* Protect the this structure access */
     63     spinlock_t lock;
     64     /*
     65      * This mutex is used to ensure that files are not removed
     66      * while epoll is using them. This is held during the event
     67      * collection loop, the file cleanup path, the epoll file exit
     68      * code and the ctl operations.
     69      */
     70     /* 添加, 修改或者删除监听fd的时候, 以及epoll_wait返回, 向用户空间
     71      * 传递数据时都会持有这个互斥锁, 所以在用户空间可以放心的在多个线程
     72      * 中同时执行epoll相关的操作, 内核级已经做了保护. */
     73     struct mutex mtx;
     74     /* Wait queue used by sys_epoll_wait() */
     75     /* 调用epoll_wait()时, 我们就是"睡"在了这个等待队列上... */
     76     wait_queue_head_t wq;
     77     /* Wait queue used by file->poll() */
      78     /* 这个用于epollfd本身被poll的时候... */
     79     wait_queue_head_t poll_wait;
     80     /* List of ready file descriptors */
     81     /* 所有已经ready的epitem都在这个链表里面 */
     82     struct list_head rdllist;
     83     /* RB tree root used to store monitored fd structs */
     84     /* 所有要监听的epitem都在这里 */
     85     struct rb_root rbr;
     86     /*
     87      * 这是一个单链表, 链接着所有的struct epitem: 当正在向用户空间转移就绪events时,
     88      * 期间新就绪的epitem都先挂在这个链表上(而不是rdllist), 见下面ovflist的用法.
     89      * This is a single linked list that chains all the "struct epitem" that
     90      * happened while transferring ready events to userspace w/out
     91      * holding ->lock.
     92      */
     93     struct epitem *ovflist;
     94     /* The user that created the eventpoll descriptor */
     95     /* 这里保存了一些用户变量, 比如fd监听数量的最大值等等 */
     96     struct user_struct *user;
     97 };
     98 /*
     99  * Each file descriptor added to the eventpoll interface will
    100  * have an entry of this type linked to the "rbr" RB tree.
    101  */
    102 /* epitem 表示一个被监听的fd */
    103 struct epitem {
    104     /* RB tree node used to link this structure to the eventpoll RB tree */
    105     /* rb_node, 当使用epoll_ctl()将一批fds加入到某个epollfd时, 内核会分配
    106      * 一批的epitem与fds们对应, 而且它们以rb_tree的形式组织起来, tree的root
    107      * 保存在epollfd, 也就是struct eventpoll中.
    108      * 在这里使用rb_tree的原因我认为是提高查找,插入以及删除的速度.
    109      * rb_tree对以上3个操作都具有O(lgN)的时间复杂度 */
    110     struct rb_node rbn;
    111     /* List header used to link this structure to the eventpoll ready list */
    112     /* 链表节点, 所有已经ready的epitem都会被链到eventpoll的rdllist中 */
    113     struct list_head rdllink;
    114     /*
    115      * Works together "struct eventpoll"->ovflist in keeping the
    116      * single linked chain of items.
    117      */
    118     /* 这个在代码中再解释... */
    119     struct epitem *next;
    120     /* The file descriptor information this item refers to */
    121     /* epitem对应的fd和struct file */
    122     struct epoll_filefd ffd;
    123     /* Number of active wait queue attached to poll operations */
    124     int nwait;
    125     /* List containing poll wait queues */
    126     struct list_head pwqlist;
    127     /* The "container" of this item */
    128     /* 当前epitem属于哪个eventpoll */
    129     struct eventpoll *ep;
    130     /* List header used to link this item to the "struct file" items list */
    131     struct list_head fllink;
    132     /* The structure that describe the interested events and the source fd */
    133     /* 当前的epitem关心哪些events, 这个数据是调用epoll_ctl时从用户态传递过来 */
    134     struct epoll_event event;
    135 };
    136 struct epoll_filefd {
    137     struct file *file;
    138     int fd;
    139 };
    140 /* Wait structure used by the poll hooks (poll机制所用到的钩子) */
    141 struct eppoll_entry {
    142     /* List header used to link this structure to the "struct epitem" */
    143     struct list_head llink;
    144     /* The "base" pointer is set to the container "struct epitem" */
    145     struct epitem *base;
    146     /*
    147      * Wait queue item that will be linked to the target file wait
    148      * queue head.
    149      */
    150     wait_queue_t wait;
    151     /* The wait queue head that linked the "wait" wait queue item */
    152     wait_queue_head_t *whead;
    153 };
    154 /* Wrapper struct used by poll queueing */
    155 struct ep_pqueue {
    156     poll_table pt;
    157     struct epitem *epi;
    158 };
    159 /* Used by the ep_send_events() function as callback private data */
    160 struct ep_send_events_data {
    161     int maxevents;
    162     struct epoll_event __user *events;
    163 };
    164  
    165 /* --- 代码注释 --- */
    166 /* 你没看错, 这就是epoll_create()的真身, 基本啥也不干直接调用epoll_create1了,
    167  * 另外你也可以发现, size这个参数其实是没有任何用处的... */
    168 SYSCALL_DEFINE1(epoll_create, int, size)
    169 {
    170         if (size <= 0)
    171                 return -EINVAL;
    172         return sys_epoll_create1(0);
    173 }
    174 /* 这才是真正的epoll_create啊~~ */
    175 SYSCALL_DEFINE1(epoll_create1, int, flags)
    176 {
    177     int error;
    178     struct eventpoll *ep = NULL;//主描述符
    179     /* Check the EPOLL_* constant for consistency.  */
    180     /* 这句没啥用处... */
    181     BUILD_BUG_ON(EPOLL_CLOEXEC != O_CLOEXEC);
    182     /* 对于epoll来讲, 目前唯一有效的FLAG就是CLOEXEC */
    183     if (flags & ~EPOLL_CLOEXEC)
    184         return -EINVAL;
    185     /*
    186      * Create the internal data structure ("struct eventpoll").
    187      */
    188     /* 分配一个struct eventpoll, 分配和初始化细节我们随后深聊~ */
    189     error = ep_alloc(&ep);
    190     if (error < 0)
    191         return error;
    192     /*
    193      * Creates all the items needed to setup an eventpoll file. That is,
    194      * a file structure and a free file descriptor.
    195      */
    196     /* 这里是创建一个匿名fd, 说起来就话长了...长话短说:
    197      * epollfd本身并不存在一个真正的文件与之对应, 所以内核需要创建一个
    198      * "虚拟"的文件, 并为之分配真正的struct file结构, 而且有真正的fd.
    199      * 这里2个参数比较关键:
    200      * eventpoll_fops, fops就是file operations, 就是当你对这个文件(这里是虚拟的)进行操作(比如读)时,
    201      * fops里面的函数指针指向真正的操作实现, 类似C++里面虚函数和子类的概念.
    202      * epoll只实现了poll和release(就是close)操作, 其它文件系统操作都有VFS全权处理了.
     203      * ep, ep就是struct eventpoll, 它会作为一个私有数据保存在struct file的private_data指针里面.
    204      * 其实说白了, 就是为了能通过fd找到struct file, 通过struct file能找到eventpoll结构.
    205      * 如果懂一点Linux下字符设备驱动开发, 这里应该是很好理解的,
    206      * 推荐阅读 <Linux device driver 3rd>
    207      */
    208     error = anon_inode_getfd("[eventpoll]", &eventpoll_fops, ep,
    209                  O_RDWR | (flags & O_CLOEXEC));
    210     if (error < 0)
    211         ep_free(ep);
    212     return error;
    213 }
    214 /*
    215 * 创建好epollfd后, 接下来我们要往里面添加fd咯
    216 * 来看epoll_ctl
    217 * epfd 就是epollfd
    218 * op ADD,MOD,DEL
    219 * fd 需要监听的描述符
    220 * event 我们关心的events
    221 */
    222 SYSCALL_DEFINE4(epoll_ctl, int, epfd, int, op, int, fd,
    223         struct epoll_event __user *, event)
    224 {
    225     int error;
    226     struct file *file, *tfile;
    227     struct eventpoll *ep;
    228     struct epitem *epi;
    229     struct epoll_event epds;
    230     error = -EFAULT;
    231     /*
    232      * 错误处理以及从用户空间将epoll_event结构copy到内核空间.
    233      */
    234     if (ep_op_has_event(op) &&
    235         copy_from_user(&epds, event, sizeof(struct epoll_event)))
    236         goto error_return;
    237     /* Get the "struct file *" for the eventpoll file */
    238     /* 取得struct file结构, epfd既然是真正的fd, 那么内核空间
     239      * 就会有与之对应的一个struct file结构
    240      * 这个结构在epoll_create1()中, 由函数anon_inode_getfd()分配 */
    241     error = -EBADF;
    242     file = fget(epfd);
    243     if (!file)
    244         goto error_return;
    245     /* Get the "struct file *" for the target file */
    246     /* 我们需要监听的fd, 它当然也有个struct file结构, 上下2个不要搞混了哦 */
    247     tfile = fget(fd);
    248     if (!tfile)
    249         goto error_fput;
    250     /* The target file descriptor must support poll */
    251     error = -EPERM;
    252     /* 如果监听的文件不支持poll, 那就没辙了.
    253      * 你知道什么情况下, 文件会不支持poll吗?
    254      */
    255     if (!tfile->f_op || !tfile->f_op->poll)
    256         goto error_tgt_fput;
    257     /*
    258      * We have to check that the file structure underneath the file descriptor
    259      * the user passed to us _is_ an eventpoll file. And also we do not permit
    260      * adding an epoll file descriptor inside itself.
    261      */
    262     error = -EINVAL;
    263     /* epoll不能自己监听自己... */
    264     if (file == tfile || !is_file_epoll(file))
    265         goto error_tgt_fput;
    266     /*
    267      * At this point it is safe to assume that the "private_data" contains
    268      * our own data structure.
    269      */
    270     /* 取到我们的eventpoll结构, 来自与epoll_create1()中的分配 */
    271     ep = file->private_data;
    272     /* 接下来的操作有可能修改数据结构内容, 锁之~ */
    273     mutex_lock(&ep->mtx);
    274     /*
    275      * Try to lookup the file inside our RB tree, Since we grabbed "mtx"
    276      * above, we can be sure to be able to use the item looked up by
    277      * ep_find() till we release the mutex.
    278      */
    279     /* 对于每一个监听的fd, 内核都有分配一个epitem结构,
    280      * 而且我们也知道, epoll是不允许重复添加fd的,
    281      * 所以我们首先查找该fd是不是已经存在了.
    282      * ep_find()其实就是RBTREE查找, 跟C++STL的map差不多一回事, O(lgn)的时间复杂度.
    283      */
    284     epi = ep_find(ep, tfile, fd);
    285     error = -EINVAL;
    286     switch (op) {
    287         /* 首先我们关心添加 */
    288     case EPOLL_CTL_ADD:
    289         if (!epi) {
    290             /* 之前的find没有找到有效的epitem, 证明是第一次插入, 接受!
    291              * 这里我们可以知道, POLLERR和POLLHUP事件内核总是会关心的
    292              * */
    293             epds.events |= POLLERR | POLLHUP;
    294             /* rbtree插入, 详情见ep_insert()的分析
    295              * 其实我觉得这里有insert的话, 之前的find应该
    296              * 是可以省掉的... */
    297             error = ep_insert(ep, &epds, tfile, fd);
    298         } else
    299             /* 找到了!? 重复添加! */
    300             error = -EEXIST;
    301         break;
    302         /* 删除和修改操作都比较简单 */
    303     case EPOLL_CTL_DEL:
    304         if (epi)
    305             error = ep_remove(ep, epi);
    306         else
    307             error = -ENOENT;
    308         break;
    309     case EPOLL_CTL_MOD:
    310         if (epi) {
    311             epds.events |= POLLERR | POLLHUP;
    312             error = ep_modify(ep, epi, &epds);
    313         } else
    314             error = -ENOENT;
    315         break;
    316     }
    317     mutex_unlock(&ep->mtx);
    318 error_tgt_fput:
    319     fput(tfile);
    320 error_fput:
    321     fput(file);
    322 error_return:
    323     return error;
    324 }
    325 /* 分配一个eventpoll结构 */
    326 static int ep_alloc(struct eventpoll **pep)
    327 {
    328     int error;
    329     struct user_struct *user;
    330     struct eventpoll *ep;
    331     /* 获取当前用户的一些信息, 比如是不是root啦, 最大监听fd数目啦 */
    332     user = get_current_user();
    333     error = -ENOMEM;
    334     ep = kzalloc(sizeof(*ep), GFP_KERNEL);
    335     if (unlikely(!ep))
    336         goto free_uid;
    337     /* 这些都是初始化啦 */
    338     spin_lock_init(&ep->lock);
    339     mutex_init(&ep->mtx);
    340     init_waitqueue_head(&ep->wq);//初始化自己睡在的等待队列
    341     init_waitqueue_head(&ep->poll_wait);//初始化
    342     INIT_LIST_HEAD(&ep->rdllist);//初始化就绪链表
    343     ep->rbr = RB_ROOT;
    344     ep->ovflist = EP_UNACTIVE_PTR;
    345     ep->user = user;
    346     *pep = ep;
    347     return 0;
    348 free_uid:
    349     free_uid(user);
    350     return error;
    351 }
    352 /*
    353  * Must be called with "mtx" held.
    354  */
    355 /*
    356  * ep_insert()在epoll_ctl()中被调用, 完成往epollfd里面添加一个监听fd的工作
    357  * tfile是fd在内核态的struct file结构
    358  */
    359 static int ep_insert(struct eventpoll *ep, struct epoll_event *event,
    360              struct file *tfile, int fd)
    361 {
    362     int error, revents, pwake = 0;
    363     unsigned long flags;
    364     struct epitem *epi;
    365     struct ep_pqueue epq;
    366     /* 查看是否达到当前用户的最大监听数 */
    367     if (unlikely(atomic_read(&ep->user->epoll_watches) >=
    368              max_user_watches))
    369         return -ENOSPC;
    370     /* 从著名的slab中分配一个epitem */
    371     if (!(epi = kmem_cache_alloc(epi_cache, GFP_KERNEL)))
    372         return -ENOMEM;
    373     /* Item initialization follow here ... */
    374     /* 这些都是相关成员的初始化... */
    375     INIT_LIST_HEAD(&epi->rdllink);
    376     INIT_LIST_HEAD(&epi->fllink);
    377     INIT_LIST_HEAD(&epi->pwqlist);
    378     epi->ep = ep;
    379     /* 这里保存了我们需要监听的文件fd和它的file结构 */
    380     ep_set_ffd(&epi->ffd, tfile, fd);
    381     epi->event = *event;
    382     epi->nwait = 0;
    383     /* 这个指针的初值不是NULL哦... */
    384     epi->next = EP_UNACTIVE_PTR;
    385     /* Initialize the poll table using the queue callback */
    386     /* 好, 我们终于要进入到poll的正题了 */
    387     epq.epi = epi;
    388     /* 初始化一个poll_table
    389      * 其实就是指定调用poll_wait(注意不是epoll_wait!!!)时的回调函数,和我们关心哪些events,
    390      * ep_ptable_queue_proc()就是我们的回调啦, 初值是所有event都关心 */
    391     init_poll_funcptr(&epq.pt, ep_ptable_queue_proc);
    392     /*
    393      * Attach the item to the poll hooks and get current event bits.
    394      * We can safely use the file* here because its usage count has
    395      * been increased by the caller of this function. Note that after
    396      * this operation completes, the poll callback can start hitting
    397      * the new item.
    398      */
     399     /* 这一步很关键, 也比较难懂, 完全是内核的poll机制导致的...
    400      * 首先, f_op->poll()一般来说只是个wrapper, 它会调用真正的poll实现,
    401      * 拿UDP的socket来举例, 这里就是这样的调用流程: f_op->poll(), sock_poll(),
    402      * udp_poll(), datagram_poll(), sock_poll_wait(), 最后调用到我们上面指定的
    403      * ep_ptable_queue_proc()这个回调函数...(好深的调用路径...).
    404      * 完成这一步, 我们的epitem就跟这个socket关联起来了, 当它有状态变化时,
    405      * 会通过ep_poll_callback()来通知.
    406      * 最后, 这个函数还会查询当前的fd是不是已经有啥event已经ready了, 有的话
    407      * 会将event返回. */
    408     revents = tfile->f_op->poll(tfile, &epq.pt);
    409     /*
    410      * We have to check if something went wrong during the poll wait queue
    411      * install process. Namely an allocation for a wait queue failed due
    412      * high memory pressure.
    413      */
    414     error = -ENOMEM;
    415     if (epi->nwait < 0)
    416         goto error_unregister;
    417     /* Add the current item to the list of active epoll hook for this file */
    418     /* 这个就是每个文件会将所有监听自己的epitem链起来 */
    419     spin_lock(&tfile->f_lock);
    420     list_add_tail(&epi->fllink, &tfile->f_ep_links);
    421     spin_unlock(&tfile->f_lock);
    422     /*
    423      * Add the current item to the RB tree. All RB tree operations are
    424      * protected by "mtx", and ep_insert() is called with "mtx" held.
    425      */
    426     /* 都搞定后, 将epitem插入到对应的eventpoll中去 */
    427     ep_rbtree_insert(ep, epi);
    428     /* We have to drop the new item inside our item list to keep track of it */
    429     spin_lock_irqsave(&ep->lock, flags);
    430     /* If the file is already "ready" we drop it inside the ready list */
    431     /* 到达这里后, 如果我们监听的fd已经有事件发生, 那就要处理一下 */
    432     if ((revents & event->events) && !ep_is_linked(&epi->rdllink)) {
    433         /* 将当前的epitem加入到ready list中去 */
    434         list_add_tail(&epi->rdllink, &ep->rdllist);
    435         /* Notify waiting tasks that events are available */
    436         /* 谁在epoll_wait, 就唤醒它... */
    437         if (waitqueue_active(&ep->wq))
    438             wake_up_locked(&ep->wq);
    439         /* 谁在epoll当前的epollfd, 也唤醒它... */
    440         if (waitqueue_active(&ep->poll_wait))
    441             pwake++;
    442     }
    443     spin_unlock_irqrestore(&ep->lock, flags);
    444     atomic_inc(&ep->user->epoll_watches);
    445     /* We have to call this outside the lock */
    446     if (pwake)
    447         ep_poll_safewake(&ep->poll_wait);
    448     return 0;
    449 error_unregister:
    450     ep_unregister_pollwait(ep, epi);
    451     /*
    452      * We need to do this because an event could have been arrived on some
    453      * allocated wait queue. Note that we don't care about the ep->ovflist
    454      * list, since that is used/cleaned only inside a section bound by "mtx".
    455      * And ep_insert() is called with "mtx" held.
    456      */
    457     spin_lock_irqsave(&ep->lock, flags);
    458     if (ep_is_linked(&epi->rdllink))
    459         list_del_init(&epi->rdllink);
    460     spin_unlock_irqrestore(&ep->lock, flags);
    461     kmem_cache_free(epi_cache, epi);
    462     return error;
    463 }
    464 /*
    465  * This is the callback that is used to add our wait queue to the
    466  * target file wakeup lists.
    467  */
    468 /*
    469  * 该函数在调用f_op->poll()时会被调用.
    470  * 也就是epoll主动poll某个fd时, 用来将epitem与指定的fd关联起来的.
    471  * 关联的办法就是使用等待队列(waitqueue)
    472  */
    473 static void ep_ptable_queue_proc(struct file *file, wait_queue_head_t *whead,
    474                  poll_table *pt)
    475 {
    476     struct epitem *epi = ep_item_from_epqueue(pt);
    477     struct eppoll_entry *pwq;
    478     if (epi->nwait >= 0 && (pwq = kmem_cache_alloc(pwq_cache, GFP_KERNEL))) {
    479         /* 初始化等待队列, 指定ep_poll_callback为唤醒时的回调函数,
    480          * 当我们监听的fd发生状态改变时, 也就是队列头被唤醒时,
    481          * 指定的回调函数将会被调用. */
    482         init_waitqueue_func_entry(&pwq->wait, ep_poll_callback);
    483         pwq->whead = whead;
    484         pwq->base = epi;
    485         /* 将刚分配的等待队列成员加入到头中, 头是由fd持有的 */
    486         add_wait_queue(whead, &pwq->wait);
    487         list_add_tail(&pwq->llink, &epi->pwqlist);
    488         /* nwait记录了当前epitem加入到了多少个等待队列中,
    489          * 我认为这个值最大也只会是1... */
    490         epi->nwait++;
    491     } else {
    492         /* We have to signal that an error occurred */
    493         epi->nwait = -1;
    494     }
    495 }
    496 /*
    497  * This is the callback that is passed to the wait queue wakeup
    498  * machanism. It is called by the stored file descriptors when they
    499  * have events to report.
    500  */
    501 /*
    502  * 这个是关键性的回调函数, 当我们监听的fd发生状态改变时, 它会被调用.
    503  * 参数key被当作一个unsigned long整数使用, 携带的是events.
    504  */
    505 static int ep_poll_callback(wait_queue_t *wait, unsigned mode, int sync, void *key)
    506 {
    507     int pwake = 0;
    508     unsigned long flags;
    509     struct epitem *epi = ep_item_from_wait(wait);//从等待队列获取epitem.需要知道哪个进程挂载到这个设备
    510     struct eventpoll *ep = epi->ep;//获取
    511     spin_lock_irqsave(&ep->lock, flags);
    512     /*
    513      * If the event mask does not contain any poll(2) event, we consider the
    514      * descriptor to be disabled. This condition is likely the effect of the
    515      * EPOLLONESHOT bit that disables the descriptor when an event is received,
    516      * until the next EPOLL_CTL_MOD will be issued.
    517      */
    518     if (!(epi->event.events & ~EP_PRIVATE_BITS))
    519         goto out_unlock;
    520     /*
    521      * Check the events coming with the callback. At this stage, not
    522      * every device reports the events in the "key" parameter of the
    523      * callback. We need to be able to handle both cases here, hence the
    524      * test for "key" != NULL before the event match test.
    525      */
    526     /* 没有我们关心的event... */
    527     if (key && !((unsigned long) key & epi->event.events))
    528         goto out_unlock;
    529     /*
    530      * If we are trasfering events to userspace, we can hold no locks
    531      * (because we're accessing user memory, and because of linux f_op->poll()
    532      * semantics). All the events that happens during that period of time are
    533      * chained in ep->ovflist and requeued later on.
    534      */
    535     /*
    536      * 这里看起来可能有点费解, 其实干的事情比较简单:
    537      * 如果该callback被调用的同时, epoll_wait()已经返回了,
    538      * 也就是说, 此刻应用程序有可能已经在循环获取events,
    539      * 这种情况下, 内核将此刻发生event的epitem用一个单独的链表
    540      * 链起来, 不发给应用程序, 也不丢弃, 而是在下一次epoll_wait
    541      * 时返回给用户.
    542      */
    543     if (unlikely(ep->ovflist != EP_UNACTIVE_PTR)) {
    544         if (epi->next == EP_UNACTIVE_PTR) {
    545             epi->next = ep->ovflist;
    546             ep->ovflist = epi;
    547         }
    548         goto out_unlock;
    549     }
    550     /* If this file is already in the ready list we exit soon */
    551     /* 将当前的epitem放入ready list */
    552     if (!ep_is_linked(&epi->rdllink))
    553         list_add_tail(&epi->rdllink, &ep->rdllist);
    554     /*
    555      * Wake up ( if active ) both the eventpoll wait list and the ->poll()
    556      * wait list.
    557      */
    558     /* 唤醒epoll_wait... */
    559     if (waitqueue_active(&ep->wq))
    560         wake_up_locked(&ep->wq);
    561     /* 如果epollfd也在被poll, 那就唤醒队列里面的所有成员. */
    562     if (waitqueue_active(&ep->poll_wait))
    563         pwake++;
    564 out_unlock:
    565     spin_unlock_irqrestore(&ep->lock, flags);
    566     /* We have to call this outside the lock */
    567     if (pwake)
    568         ep_poll_safewake(&ep->poll_wait);
    569     return 1;
    570 }
    571 /*
    572  * Implement the event wait interface for the eventpoll file. It is the kernel
    573  * part of the user space epoll_wait(2).
    574  */
    575 SYSCALL_DEFINE4(epoll_wait, int, epfd, struct epoll_event __user *, events,
    576         int, maxevents, int, timeout)
    577 {
    578     int error;
    579     struct file *file;
    580     struct eventpoll *ep;
    581     /* The maximum number of event must be greater than zero */
    582     if (maxevents <= 0 || maxevents > EP_MAX_EVENTS)
    583         return -EINVAL;
    584     /* Verify that the area passed by the user is writeable */
    585     /* 这个地方有必要说明一下:
    586      * 内核对应用程序采取的策略是"绝对不信任",
     587      * 所以内核跟应用程序之间的数据交互大都是copy, 不允许(有时候也是不能...)指针引用.
    588      * epoll_wait()需要内核返回数据给用户空间, 内存由用户程序提供,
    589      * 所以内核会用一些手段来验证这一段内存空间是不是有效的.
    590      */
    591     if (!access_ok(VERIFY_WRITE, events, maxevents * sizeof(struct epoll_event))) {
    592         error = -EFAULT;
    593         goto error_return;
    594     }
    595     /* Get the "struct file *" for the eventpoll file */
    596     error = -EBADF;
    597     /* 获取epollfd的struct file, epollfd也是文件嘛 */
    598     file = fget(epfd);
    599     if (!file)
    600         goto error_return;
    601     /*
    602      * We have to check that the file structure underneath the fd
    603      * the user passed to us _is_ an eventpoll file.
    604      */
    605     error = -EINVAL;
    606     /* 检查一下它是不是一个真正的epollfd... */
    607     if (!is_file_epoll(file))
    608         goto error_fput;
    609     /*
    610      * At this point it is safe to assume that the "private_data" contains
    611      * our own data structure.
    612      */
    613     /* 获取eventpoll结构 */
    614     ep = file->private_data;
    615     /* Time to fish for events ... */
    616     /* OK, 睡觉, 等待事件到来~~ */
    617     error = ep_poll(ep, events, maxevents, timeout);
    618 error_fput:
    619     fput(file);
    620 error_return:
    621     return error;
    622 }
    623 /* 这个函数真正将执行epoll_wait的进程带入睡眠状态... */
    624 static int ep_poll(struct eventpoll *ep, struct epoll_event __user *events,
    625            int maxevents, long timeout)
    626 {
    627     int res, eavail;
    628     unsigned long flags;
    629     long jtimeout;
    630     wait_queue_t wait;//等待队列
    631     /*
    632      * Calculate the timeout by checking for the "infinite" value (-1)
    633      * and the overflow condition. The passed timeout is in milliseconds,
    634      * that why (t * HZ) / 1000.
    635      */
    636     /* 计算睡觉时间, 毫秒要转换为HZ */
    637     jtimeout = (timeout < 0 || timeout >= EP_MAX_MSTIMEO) ?
    638         MAX_SCHEDULE_TIMEOUT : (timeout * HZ + 999) / 1000;
    639 retry:
    640     spin_lock_irqsave(&ep->lock, flags);
    641     res = 0;
    642     /* 如果ready list不为空, 就不睡了, 直接干活... */
    643     if (list_empty(&ep->rdllist)) {
    644         /*
    645          * We don't have any available event to return to the caller.
    646          * We need to sleep here, and we will be wake up by
    647          * ep_poll_callback() when events will become available.
    648          */
    649         /* OK, 初始化一个等待队列, 准备直接把自己挂起,
    650          * 注意current是一个宏, 代表当前进程 */
    651         init_waitqueue_entry(&wait, current);//初始化等待队列,wait表示当前进程
    652         __add_wait_queue_exclusive(&ep->wq, &wait);//挂载到ep结构的等待队列
    653         for (;;) {
    654             /*
    655              * We don't want to sleep if the ep_poll_callback() sends us
    656              * a wakeup in between. That's why we set the task state
    657              * to TASK_INTERRUPTIBLE before doing the checks.
    658              */
     659             /* 将当前进程设置为睡眠, 但是可以被信号唤醒的状态,
    660              * 注意这个设置是"将来时", 我们此刻还没睡! */
    661             set_current_state(TASK_INTERRUPTIBLE);
    662             /* 如果这个时候, ready list里面有成员了,
    663              * 或者睡眠时间已经过了, 就直接不睡了... */
    664             if (!list_empty(&ep->rdllist) || !jtimeout)
    665                 break;
    666             /* 如果有信号产生, 也起床... */
    667             if (signal_pending(current)) {
    668                 res = -EINTR;
    669                 break;
    670             }
    671             /* 啥事都没有,解锁, 睡觉... */
    672             spin_unlock_irqrestore(&ep->lock, flags);
    673             /* jtimeout这个时间后, 会被唤醒,
    674              * ep_poll_callback()如果此时被调用,
    675              * 那么我们就会直接被唤醒, 不用等时间了...
    676              * 再次强调一下ep_poll_callback()的调用时机是由被监听的fd
    677              * 的具体实现, 比如socket或者某个设备驱动来决定的,
    678              * 因为等待队列头是他们持有的, epoll和当前进程
    679              * 只是单纯的等待...
    680              **/
    681             jtimeout = schedule_timeout(jtimeout);//睡觉
    682             spin_lock_irqsave(&ep->lock, flags);
    683         }
    684         __remove_wait_queue(&ep->wq, &wait);
    685         /* OK 我们醒来了... */
    686         set_current_state(TASK_RUNNING);
    687     }
    688     /* Is it worth to try to dig for events ? */
    689     eavail = !list_empty(&ep->rdllist) || ep->ovflist != EP_UNACTIVE_PTR;
    690     spin_unlock_irqrestore(&ep->lock, flags);
    691     /*
    692      * Try to transfer events to user space. In case we get 0 events and
    693      * there's still timeout left over, we go trying again in search of
    694      * more luck.
    695      */
    696     /* 如果一切正常, 有event发生, 就开始准备数据copy给用户空间了... */
    697     if (!res && eavail &&
    698         !(res = ep_send_events(ep, events, maxevents)) && jtimeout)
    699         goto retry;
    700     return res;
    701 }
    702 /* 这个简单, 我们直奔下一个... */
    703 static int ep_send_events(struct eventpoll *ep,
    704               struct epoll_event __user *events, int maxevents)
    705 {
    706     struct ep_send_events_data esed;
    707     esed.maxevents = maxevents;
    708     esed.events = events;
    709     return ep_scan_ready_list(ep, ep_send_events_proc, &esed);
    710 }
    711 /**
    712  * ep_scan_ready_list - Scans the ready list in a way that makes possible for
    713  *                      the scan code, to call f_op->poll(). Also allows for
    714  *                      O(NumReady) performance.
    715  *
    716  * @ep: Pointer to the epoll private data structure.
    717  * @sproc: Pointer to the scan callback.
    718  * @priv: Private opaque data passed to the @sproc callback.
    719  *
    720  * Returns: The same integer error code returned by the @sproc callback.
    721  */
    722 static int ep_scan_ready_list(struct eventpoll *ep,
    723                   int (*sproc)(struct eventpoll *,
    724                        struct list_head *, void *),
    725                   void *priv)
    726 {
    727     int error, pwake = 0;
    728     unsigned long flags;
    729     struct epitem *epi, *nepi;
    730     LIST_HEAD(txlist);
    731     /*
    732      * We need to lock this because we could be hit by
    733      * eventpoll_release_file() and epoll_ctl().
    734      */
    735     mutex_lock(&ep->mtx);
    736     /*
    737      * Steal the ready list, and re-init the original one to the
    738      * empty list. Also, set ep->ovflist to NULL so that events
    739      * happening while looping w/out locks, are not lost. We cannot
    740      * have the poll callback to queue directly on ep->rdllist,
    741      * because we want the "sproc" callback to be able to do it
    742      * in a lockless way.
    743      */
    744     spin_lock_irqsave(&ep->lock, flags);
    745     /* 这一步要注意, 首先, 所有监听到events的epitem都链到rdllist上了,
    746      * 但是这一步之后, 所有的epitem都转移到了txlist上, 而rdllist被清空了,
    747      * 要注意哦, rdllist已经被清空了! */
    748     list_splice_init(&ep->rdllist, &txlist);
    749     /* ovflist, 在ep_poll_callback()里面我解释过, 此时此刻我们不希望
    750      * 有新的event加入到ready list中了, 保存后下次再处理... */
    751     ep->ovflist = NULL;
    752     spin_unlock_irqrestore(&ep->lock, flags);
    753     /*
    754      * Now call the callback function.
    755      */
    756     /* 在这个回调函数里面处理每个epitem
    757      * sproc 就是 ep_send_events_proc, 下面会注释到. */
    758     error = (*sproc)(ep, &txlist, priv);
    759     spin_lock_irqsave(&ep->lock, flags);
    760     /*
    761      * During the time we spent inside the "sproc" callback, some
    762      * other events might have been queued by the poll callback.
    763      * We re-insert them inside the main ready-list here.
    764      */
    765     /* 现在我们来处理ovflist, 这些epitem都是我们在传递数据给用户空间时
    766      * 监听到了事件. */
    767     for (nepi = ep->ovflist; (epi = nepi) != NULL;
    768          nepi = epi->next, epi->next = EP_UNACTIVE_PTR) {
    769         /*
    770          * We need to check if the item is already in the list.
    771          * During the "sproc" callback execution time, items are
    772          * queued into ->ovflist but the "txlist" might already
    773          * contain them, and the list_splice() below takes care of them.
    774          */
    775         /* 将这些直接放入readylist */
    776         if (!ep_is_linked(&epi->rdllink))
    777             list_add_tail(&epi->rdllink, &ep->rdllist);
    778     }
    779     /*
    780      * We need to set back ep->ovflist to EP_UNACTIVE_PTR, so that after
    781      * releasing the lock, events will be queued in the normal way inside
    782      * ep->rdllist.
    783      */
    784     ep->ovflist = EP_UNACTIVE_PTR;
    785     /*
    786      * Quickly re-inject items left on "txlist".
    787      */
    788     /* 上一次没有处理完的epitem, 重新插入到ready list */
    789     list_splice(&txlist, &ep->rdllist);
    790     /* ready list不为空, 直接唤醒... */
    791     if (!list_empty(&ep->rdllist)) {
    792         /*
    793          * Wake up (if active) both the eventpoll wait list and
    794          * the ->poll() wait list (delayed after we release the lock).
    795          */
    796         if (waitqueue_active(&ep->wq))
    797             wake_up_locked(&ep->wq);
    798         if (waitqueue_active(&ep->poll_wait))
    799             pwake++;
    800     }
    801     spin_unlock_irqrestore(&ep->lock, flags);
    802     mutex_unlock(&ep->mtx);
    803     /* We have to call this outside the lock */
    804     if (pwake)
    805         ep_poll_safewake(&ep->poll_wait);
    806     return error;
    807 }
     808 /* 该函数作为callback在ep_scan_ready_list()中被调用
    809  * head是一个链表, 包含了已经ready的epitem,
    810  * 这个不是eventpoll里面的ready list, 而是上面函数中的txlist.
    811  */
    812 static int ep_send_events_proc(struct eventpoll *ep, struct list_head *head,
    813                    void *priv)
    814 {
    815     struct ep_send_events_data *esed = priv;
    816     int eventcnt;
    817     unsigned int revents;
    818     struct epitem *epi;
    819     struct epoll_event __user *uevent;
    820     /*
    821      * We can loop without lock because we are passed a task private list.
    822      * Items cannot vanish during the loop because ep_scan_ready_list() is
    823      * holding "mtx" during this call.
    824      */
    825     /* 扫描整个链表... */
    826     for (eventcnt = 0, uevent = esed->events;
    827          !list_empty(head) && eventcnt < esed->maxevents;) {
    828         /* 取出第一个成员 */
    829         epi = list_first_entry(head, struct epitem, rdllink);
    830         /* 然后从链表里面移除 */
    831         list_del_init(&epi->rdllink);
    832         /* 读取events,
    833          * 注意events我们ep_poll_callback()里面已经取过一次了, 为啥还要再取?
    834          * 1. 我们当然希望能拿到此刻的最新数据, events是会变的~
    835          * 2. 不是所有的poll实现, 都通过等待队列传递了events, 有可能某些驱动压根没传
    836          * 必须主动去读取. */
    837         revents = epi->ffd.file->f_op->poll(epi->ffd.file, NULL) &
    838             epi->event.events;
    839         if (revents) {
    840             /* 将当前的事件和用户传入的数据都copy给用户空间,
    841              * 就是epoll_wait()后应用程序能读到的那一堆数据. */
    842             if (__put_user(revents, &uevent->events) ||
    843                 __put_user(epi->event.data, &uevent->data)) {
    844                 list_add(&epi->rdllink, head);
    845                 return eventcnt ? eventcnt : -EFAULT;
    846             }
    847             eventcnt++;
    848             uevent++;
    849             if (epi->event.events & EPOLLONESHOT)
    850                 epi->event.events &= EP_PRIVATE_BITS;
    851             else if (!(epi->event.events & EPOLLET)) {
    852                 /* 嘿嘿, EPOLLET和非ET的区别就在这一步之差呀~
     853                  * 如果是ET, epitem是不会再进入到ready list,
    854                  * 除非fd再次发生了状态改变, ep_poll_callback被调用.
    855                  * 如果是非ET, 不管你还有没有有效的事件或者数据,
    856                  * 都会被重新插入到ready list, 再下一次epoll_wait
    857                  * 时, 会立即返回, 并通知给用户空间. 当然如果这个
    858                  * 被监听的fds确实没事件也没数据了, epoll_wait会返回一个0,
    859                  * 空转一次.
    860                  */
    861                 list_add_tail(&epi->rdllink, &ep->rdllist);
    862             }
    863         }
    864     }
    865     return eventcnt;
    866 }
    867 /* ep_free在epollfd被close时调用,
    868  * 释放一些资源而已, 比较简单 */
    869 static void ep_free(struct eventpoll *ep)
    870 {
    871     struct rb_node *rbp;
    872     struct epitem *epi;
    873     /* We need to release all tasks waiting for these file */
    874     if (waitqueue_active(&ep->poll_wait))
    875         ep_poll_safewake(&ep->poll_wait);
    876     /*
    877      * We need to lock this because we could be hit by
    878      * eventpoll_release_file() while we're freeing the "struct eventpoll".
    879      * We do not need to hold "ep->mtx" here because the epoll file
    880      * is on the way to be removed and no one has references to it
    881      * anymore. The only hit might come from eventpoll_release_file() but
    882      * holding "epmutex" is sufficent here.
    883      */
    884     mutex_lock(&epmutex);
    885     /*
    886      * Walks through the whole tree by unregistering poll callbacks.
    887      */
    888     for (rbp = rb_first(&ep->rbr); rbp; rbp = rb_next(rbp)) {
    889         epi = rb_entry(rbp, struct epitem, rbn);
    890         ep_unregister_pollwait(ep, epi);
    891     }
    892     /*
    893      * Walks through the whole tree by freeing each "struct epitem". At this
    894      * point we are sure no poll callbacks will be lingering around, and also by
    895      * holding "epmutex" we can be sure that no file cleanup code will hit
    896      * us during this operation. So we can avoid the lock on "ep->lock".
    897      */
    898     /* 之所以在关闭epollfd之前不需要调用epoll_ctl移除已经添加的fd,
    899      * 是因为这里已经做了... */
    900     while ((rbp = rb_first(&ep->rbr)) != NULL) {
    901         epi = rb_entry(rbp, struct epitem, rbn);
    902         ep_remove(ep, epi);
    903     }
    904     mutex_unlock(&epmutex);
    905     mutex_destroy(&ep->mtx);
    906     free_uid(ep->user);
    907     kfree(ep);
    908 }
    909 /* File callbacks that implement the eventpoll file behaviour */
    910 static const struct file_operations eventpoll_fops = {
    911     .release    = ep_eventpoll_release,
    912     .poll       = ep_eventpoll_poll
    913 };
    914 /* Fast test to see if the file is an evenpoll file */
    915 static inline int is_file_epoll(struct file *f)
    916 {
    917     return f->f_op == &eventpoll_fops;
    918 }
    919 /* OK, eventpoll我认为比较重要的函数都注释完了... */
    epoll_create
    Allocates an eventpoll object from the slab cache and creates an anonymous fd together with the struct file backing it;
    the eventpoll object is stored in that file's private_data pointer, and the new fd is returned.
    The file_operations of this fd implement only poll and release.

    Initializing the eventpoll object
    Fetch the current user's information (whether it is root, the maximum number of fds it may watch, and so on) and save it in the eventpoll object;
    initialize the wait queues, the ready list, and the root of the red-black tree.

    epoll_ctl
    Copy the epoll_event structure from user space into the kernel,
    and check that the fd being added supports poll (any fd used with the epoll/poll/select I/O-multiplexing family must implement the poll operation).
    Then fetch the eventpoll object from epfd->file->private_data and branch on op: add, delete, or modify.
    For an add, first search the red-black tree inside eventpoll to see whether the fd is already present; if it is not found the insert proceeds, otherwise a "duplicate" error (-EEXIST) is returned.
    The corresponding modify and delete operations are straightforward, so I won't belabor them.

    During an insert, an epitem is created for the fd and its members are initialized, e.g. the watched fd and its struct file are stored.
    The important part is that a callback is registered for poll_wait(): it is what wakes the process up when data becomes ready (internally it hooks an entry onto the target device's wait queue). Once this step is done, the epitem is tied to the socket: whenever the socket's state changes,
    we are notified through ep_poll_callback().
    The target fd's file_operations->poll() is then called (which ends up calling poll_wait()) to actually perform this registration.
    Finally the epitem is added to the red-black tree. (A user-space view of the three ctl operations is sketched just below.)
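    For reference, a minimal user-space sketch of the three ctl operations (epfd and connfd are assumed to exist; this is illustrative, not taken from the original post):

        struct epoll_event ev;

        ev.events  = EPOLLIN | EPOLLONESHOT;
        ev.data.fd = connfd;
        epoll_ctl(epfd, EPOLL_CTL_ADD, connfd, &ev);   /* insert: builds the epitem */

        /* after the one-shot event has fired, re-arm the fd */
        epoll_ctl(epfd, EPOLL_CTL_MOD, connfd, &ev);   /* modify: ep_modify() */

        /* stop watching; the event argument is ignored for DEL */
        epoll_ctl(epfd, EPOLL_CTL_DEL, connfd, NULL);  /* delete: ep_remove() */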

    epoll_wait
    Compute the sleep time (if any), and check whether the eventpoll ready list is empty; if it is not empty there is work to do, so no sleeping. Otherwise initialize a wait queue entry, hook the current process onto ep->wq, and set the process state
    to interruptible sleep. Check whether a signal has arrived (if so, wake up immediately with -EINTR); if nothing at all has happened, call schedule_timeout() to sleep. On timeout or wakeup, first remove ourselves from the wait queue entry we registered,
    and then start copying the results to user space.
    The copy first splices the ready-event list onto an intermediate list, then walks it, copying each event to user space
    and checking for each item whether it is level-triggered; if so, the item is inserted back onto the ready list.
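    That level-triggered vs. edge-triggered difference is exactly why an EPOLLET fd must be drained until read() returns EAGAIN: with EPOLLET the epitem is not put back on the ready list (see ep_send_events_proc above), so leftover data will not by itself cause another wakeup. A rough user-space sketch, assuming the fd was set non-blocking:

        #include <errno.h>
        #include <unistd.h>

        static void drain_fd_et(int fd)
        {
            char buf[4096];

            for (;;) {
                ssize_t n = read(fd, buf, sizeof(buf));
                if (n > 0) {
                    /* ... process the n bytes in buf here ... */
                    continue;
                }
                if (n < 0 && errno == EAGAIN)
                    break;      /* fully drained: wait for the next edge */
                break;          /* n == 0 (EOF) or a real error */
            }
        }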
  • Original post: https://www.cnblogs.com/wsw-seu/p/8274195.html