  • (Delphi) Using the Disk Cache

    The Chilkat Spider component has disk-caching capabilities. To set up a disk cache, create a new directory anywhere on your local hard drive and set the CacheDir property to its path. For example, you might create "c:/spiderCache/". The UpdateCache property controls whether downloaded pages are saved to the cache. The FetchFromCache property controls whether the cache is checked first for pages. The LastFromCache property indicates whether the last URL fetched came from the cache.

    uses
        Windows, Messages, SysUtils, Variants, Classes, Graphics, Controls, Forms,
        Dialogs, StdCtrls,
        SPIDERXLib_TLB,
        OleCtrls;

    ...

    procedure TForm1.Button1Click(Sender: TObject);
    var
      spider: TSpider;
      i: Integer;
      success: Integer;
    begin
      //  The Chilkat Spider component/library is free.
      spider := TSpider.Create(Self);

      //  Set our cache directory and make sure saving-to-cache and fetching-from-cache
      //  are both turned on:
      spider.CacheDir := 'c:/spiderCache/';
      spider.FetchFromCache := 1;
      spider.UpdateCache := 1;

      //  If you run this code twice, you'll find that the 2nd run is extremely fast
      //  because the pages will be retrieved from cache.

      //  The spider object crawls a single web site at a time.  As you'll see
      //  in later examples, you can collect outbound links and use them to
      //  crawl the web.  For now, we'll simply spider 10 pages of chilkatsoft.com
      spider.Initialize('www.chilkatsoft.com');

      //  Add the 1st URL:
      spider.AddUnspidered('http://www.chilkatsoft.com/');

      //  Begin crawling the site by calling CrawlNext repeatedly.
      for i := 0 to 9 do
      begin
        success := spider.CrawlNext();
        if (success = 1) then
        begin
          //  Show the URL of the page just spidered.
          Memo1.Lines.Add(spider.LastUrl);
          //  The HTML is available in the LastHtml property.
        end
        else
        begin
          //  Did we get an error, or are there no more URLs to crawl?
          if (spider.NumUnspidered = 0) then
            ShowMessage('No more URLs to spider')
          else
            ShowMessage(spider.LastErrorText);
          //  Either way, there is nothing more to fetch, so stop the loop.
          break;
        end;

        //  Sleep 1 second before spidering the next URL.  Waiting a short time
        //  before the next fetch prevents undue stress on the web server.
        //  However, if the last page was retrieved from cache, there is no
        //  need to pause.
        if (spider.LastFromCache <> 1) then
        begin
          spider.SleepMs(1000);
        end;
      end;
    end;
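    The outbound-link crawling mentioned in the comments above can be sketched as follows. This is a minimal sketch, not part of the original example; it assumes the NumOutboundLinks property and GetOutboundLink method of the Chilkat Spider ActiveX component, and a Button2/Memo1 on the form.

    ```delphi
    procedure TForm1.Button2Click(Sender: TObject);
    var
      spider: TSpider;
      i: Integer;
    begin
      spider := TSpider.Create(Self);
      spider.Initialize('www.chilkatsoft.com');
      spider.AddUnspidered('http://www.chilkatsoft.com/');

      //  Crawl a single page, then list the outbound (off-site) links it
      //  contains.  These URLs could be fed back into Initialize/AddUnspidered
      //  to crawl other sites.
      if (spider.CrawlNext() = 1) then
      begin
        for i := 0 to spider.NumOutboundLinks - 1 do
          Memo1.Lines.Add(spider.GetOutboundLink(i));
      end;
    end;
    ```

    Note that the spider object is created with the form as its owner, so it is freed automatically when the form is destroyed.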
  • Original article: https://www.cnblogs.com/MaxWoods/p/3639964.html