IP proxies
# https://stackoverflow.com/questions/4710483/scrapy-and-proxies
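The Stack Overflow thread above sets the proxy through `request.meta`; a minimal downloader-middleware sketch along those lines (the proxy addresses and middleware path are placeholders, not a real pool):

```python
import random

# Hypothetical proxy pool; replace with working proxy addresses.
PROXY_POOL = [
    "http://127.0.0.1:8888",
    "http://127.0.0.1:8889",
]

class RandomProxyMiddleware:
    """Scrapy downloader middleware: attach a random proxy to each request.

    Enable it in settings.py, e.g.:
        DOWNLOADER_MIDDLEWARES = {"myproject.middlewares.RandomProxyMiddleware": 100}
    """

    def process_request(self, request, spider):
        # Scrapy's built-in HttpProxyMiddleware reads the proxy from request.meta.
        request.meta["proxy"] = random.choice(PROXY_POOL)
        return None  # let the request continue through the middleware chain
```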
Client identification: User-Agent
#1. A single User-Agent: define it once (e.g. USER_AGENT in settings) and pass it as a Request parameter
#2. Rotating User-Agents: define a middleware, then register it under DOWNLOADER_MIDDLEWARES in settings
# http://blog.csdn.net/sinat_28680819/article/details/71597421
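The rotating variant (point 2) can be sketched as a downloader middleware; the User-Agent strings are shortened examples and the settings path is hypothetical:

```python
import random

# Illustrative (truncated) User-Agent strings; use a real, current list in practice.
USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36",
    "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36",
]

class RotateUserAgentMiddleware:
    """Scrapy downloader middleware: pick a random User-Agent for every request.

    Enable it in settings.py, e.g.:
        DOWNLOADER_MIDDLEWARES = {"myproject.middlewares.RotateUserAgentMiddleware": 400}
    """

    def process_request(self, request, spider):
        request.headers["User-Agent"] = random.choice(USER_AGENTS)
        return None
```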
Cookies: simulating a login
#1. Build the cookie, e.g. cookie = {'Cookie': 'qm_username=137958873x0; qm_sid=8a4cce2f4413b4a5c9981093942b3f6f,qMUZrWmt0Z0XQbVZWKlNBZXUzT0xCSXJNRHNXU1NDVzd6MXZFQjJSbGZxMF8.; RK=/jt+Uh72a5; pgv_pvid=2786550760; pgv_info=ssid=s5861035317; ptui_loginuin=1379588730; ptisp=ctc; ptcz=e3d339f47c356076793ff4c270b572e35ed69746057039ee2e78677e391793b9; pt2gguin=o1379588730; uin=o1379588730; skey=@ssNmMQP3p; p_uin=o1379588730; p_skey=2w78648Kd9wuwxK9lsiDM02MQFJfSIhEuxhhE*aH-SU_; pt4_token=aGGiHcty94vO0iC8mxQ*OgkHOI6fZmdzQCxsb-baX1U_'}
#2. Pass it as a request parameter: html = requests.get(url, cookies=cookie).content
# Note: requests expects the cookies as a dict, so convert the raw cookie string first
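The conversion mentioned in the note can be done with a small helper that splits a raw `name=value; name=value` cookie header into the name-to-value dict that `requests.get(url, cookies=...)` expects (the values below are dummies, not a real session):

```python
def cookie_str_to_dict(raw: str) -> dict:
    """Split a raw 'name1=value1; name2=value2' cookie header into a dict."""
    cookies = {}
    for pair in raw.split(";"):
        pair = pair.strip()
        if "=" in pair:
            name, _, value = pair.partition("=")  # split on the first '=' only
            cookies[name.strip()] = value.strip()
    return cookies

# Example with made-up values:
print(cookie_str_to_dict("uin=o12345; skey=@abc"))  # → {'uin': 'o12345', 'skey': '@abc'}
```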
Concurrency: Scrapy downloads concurrently by default (asynchronous, built on Twisted)
AJAX pages
# The data often comes from a JSON endpoint, not the page URL itself: find the real request,
# then walk the response with json.loads plus for loops, level by level, to pick out the fields
# http://www.scrapingauthority.com/scrapy-ajax
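The "json.loads plus for loops" step looks like this on a made-up payload shaped like a typical AJAX response (the keys `data`/`items`/`title` are illustrative, not from any real endpoint):

```python
import json

# Hypothetical JSON body, as returned by an AJAX endpoint.
raw = '{"data": {"items": [{"title": "first"}, {"title": "second"}]}}'

payload = json.loads(raw)              # str -> nested dicts and lists
for item in payload["data"]["items"]:  # walk the structure level by level
    print(item["title"])               # prints: first, then second
```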
Handling JavaScript-rendered pages
Splash (run it via Docker; also suits distributed crawling), recommended
selenium + PhantomJS (note: PhantomJS is no longer maintained; headless Chrome/Firefox are the usual replacements now)
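For the Splash route, the scrapy-splash plugin is the usual glue; a settings fragment in the shape its README describes, assuming Splash runs in a local Docker container (adjust the URL for your setup):

```python
# settings.py (fragment) for scrapy-splash, assuming a local Splash container:
#   docker run -p 8050:8050 scrapinghub/splash
SPLASH_URL = "http://localhost:8050"

DOWNLOADER_MIDDLEWARES = {
    "scrapy_splash.SplashCookiesMiddleware": 723,
    "scrapy_splash.SplashMiddleware": 725,
    "scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware": 810,
}
SPIDER_MIDDLEWARES = {
    "scrapy_splash.SplashDeduplicateArgsMiddleware": 100,
}
DUPEFILTER_CLASS = "scrapy_splash.SplashAwareDupeFilter"
```

Spiders then issue `SplashRequest` instead of `Request` so pages are rendered before parsing.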
download_delay (download delay)
# settings: DOWNLOAD_DELAY applies globally
# a download_delay attribute on a spider class applies only to that spider
# by default the actual wait is randomized between 0.5*download_delay and 1.5*download_delay
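The three notes above as a sketch, with illustrative values and a hypothetical spider name:

```python
# settings.py (fragment): global delay between requests
DOWNLOAD_DELAY = 2               # base delay in seconds
RANDOMIZE_DOWNLOAD_DELAY = True  # default: actual wait is 0.5*delay .. 1.5*delay

# Per-spider override: a download_delay class attribute beats the global setting.
import scrapy

class SlowSpider(scrapy.Spider):
    name = "slow_spider"
    download_delay = 5  # applies only to this spider
```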