  • (Delphi) Using the Disk Cache

    The Chilkat Spider component has disk caching capabilities. To set up a disk cache, create a new directory anywhere on your local hard drive and set the CacheDir property to its path. For example, you might create "c:/spiderCache/". The UpdateCache property controls whether downloaded pages are saved to the cache. The FetchFromCache property controls whether the cache is checked first for pages. The LastFromCache property indicates whether the most recently fetched URL came from the cache.
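
    The instructions above assume you create the cache directory yourself. If you would rather create it at runtime before assigning CacheDir, a minimal sketch using the standard ForceDirectories function from SysUtils (backslashes are used here because it is a Windows RTL call; it refers to the same example directory as above):

    //  Create the cache directory if it does not already exist.
    //  ForceDirectories returns False if the directory could not be created.
    if not ForceDirectories('c:\spiderCache\') then
      ShowMessage('Unable to create the cache directory');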

    uses
        Windows, Messages, SysUtils, Variants, Classes, Graphics, Controls, Forms,
        Dialogs, StdCtrls,
        SPIDERXLib_TLB,
        OleCtrls;
    
    ...
    
    procedure TForm1.Button1Click(Sender: TObject);
    var
      spider: TSpider;
      i: Integer;
      success: Integer;
    begin
      //  The Chilkat Spider component/library is free.
      spider := TSpider.Create(Self);

      //  Set our cache directory and make sure saving-to-cache and fetching-from-cache
      //  are both turned on:
      spider.CacheDir := 'c:/spiderCache/';
      spider.FetchFromCache := 1;
      spider.UpdateCache := 1;

      //  If you run this code twice, you'll find that the 2nd run is extremely fast
      //  because the pages will be retrieved from cache.

      //  The spider object crawls a single web site at a time.  As you'll see
      //  in later examples, you can collect outbound links and use them to
      //  crawl the web.  For now, we'll simply spider 10 pages of chilkatsoft.com.
      spider.Initialize('www.chilkatsoft.com');

      //  Add the 1st URL:
      spider.AddUnspidered('http://www.chilkatsoft.com/');

      //  Begin crawling the site by calling CrawlNext repeatedly.
      for i := 0 to 9 do
      begin
        success := spider.CrawlNext();
        if (success = 1) then
        begin
          //  Show the URL of the page just spidered.
          Memo1.Lines.Add(spider.LastUrl);
          //  The HTML is available in the LastHtml property.
        end
        else
        begin
          //  Did we get an error, or are there no more URLs to crawl?
          if (spider.NumUnspidered = 0) then
            ShowMessage('No more URLs to spider')
          else
            ShowMessage(spider.LastErrorText);
        end;

        //  Sleep 1 second before spidering the next URL.
        //  The reason for waiting a short time before the next fetch is to prevent
        //  undue stress on the web server.  However, if the last page was retrieved
        //  from cache, there is no need to pause.
        if (spider.LastFromCache <> 1) then
          spider.SleepMs(1000);
      end;
    end;
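
    The example above enables caching in both directions. To refresh the cache instead, that is, always download a fresh copy while still saving it to disk, the property descriptions above suggest turning FetchFromCache off and leaving UpdateCache on. A sketch, not part of the original example:

    //  Always download fresh pages, but keep the cache up to date:
    //  the cache is never consulted (FetchFromCache = 0), yet each
    //  downloaded page is still written to CacheDir (UpdateCache = 1).
    spider.FetchFromCache := 0;
    spider.UpdateCache := 1;

    The page content itself is available in the LastHtml property after each successful CrawlNext. One way to save it, sketched with a TStringList (the output filename is hypothetical, and html would be declared as a TStringList in the var section):

    html := TStringList.Create;
    try
      //  LastHtml holds the HTML of the page fetched by the last CrawlNext.
      html.Text := spider.LastHtml;
      html.SaveToFile('c:\spiderCache\lastpage.html');
    finally
      html.Free;
    end;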
  • Original post: https://www.cnblogs.com/MaxWoods/p/3639964.html