The cache is one of the most central parts of Volley, and Volley invests heavily in it. In this chapter we follow Volley's source code to see how it handles the cache.
Recall the simple code from the previous installment: our entry point is constructing a request queue. We do not call new on it directly; instead, control is inverted to the Volley static factory.
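For reference, that simple usage boils down to something like the following sketch (context, url and the listener bodies are placeholders, not code from the earlier post):

RequestQueue queue = Volley.newRequestQueue(context);  // control handed to the static factory
StringRequest request = new StringRequest(Request.Method.GET, url,
        new Response.Listener<String>() {
            @Override
            public void onResponse(String response) {
                // use the response
            }
        },
        new Response.ErrorListener() {
            @Override
            public void onErrorResponse(VolleyError error) {
                // handle the error
            }
        });
queue.add(request);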
com.android.volley.toolbox.Volley:
public static RequestQueue newRequestQueue(Context context, HttpStack stack) {
    File cacheDir = new File(context.getCacheDir(), DEFAULT_CACHE_DIR);

    String userAgent = "volley/0";
    try {
        String packageName = context.getPackageName();
        PackageInfo info = context.getPackageManager().getPackageInfo(packageName, 0);
        userAgent = packageName + "/" + info.versionCode;
    } catch (NameNotFoundException e) {
    }

    if (stack == null) {
        if (Build.VERSION.SDK_INT >= 9) {
            stack = new HurlStack();
        } else {
            // Prior to Gingerbread, HttpUrlConnection was unreliable.
            // See: http://android-developers.blogspot.com/2011/09/androids-http-clients.html
            stack = new HttpClientStack(AndroidHttpClient.newInstance(userAgent));
        }
    }

    Network network = new BasicNetwork(stack);

    RequestQueue queue = new RequestQueue(new DiskBasedCache(cacheDir), network);
    queue.start();

    return queue;
}
The HttpStack parameter specifies which HTTP implementation you want, for example Apache's HttpClient or HttpURLConnection. If you do not specify one, Volley picks a strategy based on your SDK version.
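For example, if you wanted to force the Apache HttpClient path regardless of SDK level, you could build the stack yourself and pass it in; this is only an illustrative sketch using the same classes that appear in the factory above:

String userAgent = "volley/0";
HttpStack stack = new HttpClientStack(AndroidHttpClient.newInstance(userAgent));
RequestQueue queue = Volley.newRequestQueue(context, stack);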
This HttpStack object is then wrapped by a Network object.
As mentioned in the previous section, in order to offer a uniform network call across platforms, Volley bridges the network implementation, and the bridging interface is this Network.
The heart of Volley lies in Cache and Network.
With these two objects constructed, we can create the request queue, RequestQueue. But why do we need to call queue.start()? Let's look at the code first:
public void start() {
    stop();  // Make sure any currently running dispatchers are stopped.
    // Create the cache dispatcher and start it.
    mCacheDispatcher = new CacheDispatcher(mCacheQueue, mNetworkQueue, mCache, mDelivery);
    mCacheDispatcher.start();

    // Create network dispatchers (and corresponding threads) up to the pool size.
    for (int i = 0; i < mDispatchers.length; i++) {
        NetworkDispatcher networkDispatcher = new NetworkDispatcher(mNetworkQueue, mNetwork,
                mCache, mDelivery);
        mDispatchers[i] = networkDispatcher;
        networkDispatcher.start();
    }
}
As described in the architecture overview, Volley uses a producer/consumer model to drive its reactors, and that reaction is driven by threads. After RequestQueue.start() is called, one cache thread and a pool of network threads are started. The number of NetworkDispatcher threads is determined by the length of the mDispatchers array, and mDispatchers is assigned in RequestQueue's constructor:
public RequestQueue(Cache cache, Network network, int threadPoolSize,
        ResponseDelivery delivery) {
    mCache = cache;
    mNetwork = network;
    mDispatchers = new NetworkDispatcher[threadPoolSize];
    mDelivery = delivery;
}
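When newRequestQueue calls the two-argument constructor, the pool size falls back to a default. The constructor chain in the Volley source looks roughly like this (the default of 4 threads and the main-looper ExecutorDelivery are what I recall from this version of the code):

private static final int DEFAULT_NETWORK_THREAD_POOL_SIZE = 4;

public RequestQueue(Cache cache, Network network) {
    this(cache, network, DEFAULT_NETWORK_THREAD_POOL_SIZE);
}

public RequestQueue(Cache cache, Network network, int threadPoolSize) {
    this(cache, network, threadPoolSize,
            new ExecutorDelivery(new Handler(Looper.getMainLooper())));
}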
See how straightforward and sensible Volley's code is? From RequestQueue.start() onward, the whole context for our request has been built; all we need to do now is add the request to the queue:
public <T> Request<T> add(Request<T> request) {
    // Tag the request as belonging to this queue and add it to the set of current requests.
    request.setRequestQueue(this);
    synchronized (mCurrentRequests) {
        mCurrentRequests.add(request);
    }

    // Process requests in the order they are added.
    request.setSequence(getSequenceNumber());
    request.addMarker("add-to-queue");

    // If the request is uncacheable, skip the cache queue and go straight to the network.
    if (!request.shouldCache()) {
        mNetworkQueue.add(request);
        return request;
    }

    // Insert request into stage if there's already a request with the same cache key in flight.
    synchronized (mWaitingRequests) {
        String cacheKey = request.getCacheKey();
        System.out.println("request.cacheKey = " + cacheKey);
        if (mWaitingRequests.containsKey(cacheKey)) {
            // There is already a request in flight. Queue up.
            Queue<Request<?>> stagedRequests = mWaitingRequests.get(cacheKey);
            if (stagedRequests == null) {
                stagedRequests = new LinkedList<Request<?>>();
            }
            stagedRequests.add(request);
            mWaitingRequests.put(cacheKey, stagedRequests);
        } else {
            // Insert 'null' queue for this cacheKey, indicating there is now a request in
            // flight.
            mWaitingRequests.put(cacheKey, null);
            mCacheQueue.add(request);
        }
        return request;
    }
}
The staging branch in this code (the block that parks duplicates in mWaitingRequests) is the masterstroke: with the notion of staging, duplicate requests for the same cache key are avoided. Note also that when we add a Request we set its RequestQueue on it, so that when it finishes it can remove itself from the queue. We can also see a simple state machine here:
request.addMarker("add-to-queue");
This method is called at various points in the request's life cycle, which makes troubleshooting much easier later. Next, the Request checks whether it should be cached at all:
if (!request.shouldCache()) {
    mNetworkQueue.add(request);
    return request;
}

Intuitively we tend to assume that text data does not need caching; with this method you decide whether your result gets cached at all, and the decision is not tied to any particular data type.
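Request exposes setShouldCache for exactly this purpose; a quick sketch (url, listener, errorListener and queue are placeholders):

StringRequest request = new StringRequest(Request.Method.GET, url, listener, errorListener);
request.setShouldCache(false);  // skip the cache queue, go straight to the network
queue.add(request);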
After that, if your request is not already staged behind an identical in-flight request, it is dropped into the cache reactor.
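The other half of the staging story happens when the in-flight request finishes: RequestQueue.finish drains any requests that were parked under the same cache key back into the cache queue, which by then has been primed. Roughly, the method looks like this sketch:

void finish(Request<?> request) {
    // Remove from the set of requests currently being processed.
    synchronized (mCurrentRequests) {
        mCurrentRequests.remove(request);
    }

    if (request.shouldCache()) {
        synchronized (mWaitingRequests) {
            String cacheKey = request.getCacheKey();
            Queue<Request<?>> waitingRequests = mWaitingRequests.remove(cacheKey);
            if (waitingRequests != null) {
                // The staged duplicates go to the cache queue; the cache has been
                // primed by the request that just finished.
                mCacheQueue.addAll(waitingRequests);
            }
        }
    }
}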
Let's look at the mCacheQueue object:
private final PriorityBlockingQueue<Request<?>> mCacheQueue = new PriorityBlockingQueue<Request<?>>();
mCacheQueue is essentially a PriorityBlockingQueue, a thread-safe queue whose elements can be ordered by priority. ImageRequest, for example, specifies its Request priority:
com.android.volley.toolbox.ImageRequest:
@Override
public Priority getPriority() {
    return Priority.LOW;
}
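You can override getPriority in your own subclass the same way; under the hood, Request's compareTo orders by priority first and then by sequence number, which is what the PriorityBlockingQueue uses. A hedged sketch with a hypothetical HighPriorityStringRequest:

import com.android.volley.Response;
import com.android.volley.toolbox.StringRequest;

public class HighPriorityStringRequest extends StringRequest {
    public HighPriorityStringRequest(String url,
            Response.Listener<String> listener, Response.ErrorListener errorListener) {
        super(Method.GET, url, listener, errorListener);
    }

    @Override
    public Priority getPriority() {
        // Jump ahead of LOW-priority image requests waiting in the same queue.
        return Priority.HIGH;
    }
}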
So the priority of your own Request is entirely under your control. Back to the consumer, CacheDispatcher, which extends Thread. Once running, it first initializes the cache with mCache.initialize(); the purpose of initialization is to load the entries that already exist in the cache. The Cache implementation is DiskBasedCache.java; let's see how it implements initialization:
@Override
public synchronized void initialize() {
    if (!mRootDirectory.exists()) {
        if (!mRootDirectory.mkdirs()) {
            VolleyLog.e("Unable to create cache dir %s", mRootDirectory.getAbsolutePath());
        }
        return;
    }

    File[] files = mRootDirectory.listFiles();
    if (files == null) {
        return;
    }
    for (File file : files) {
        FileInputStream fis = null;
        try {
            fis = new FileInputStream(file);
            CacheHeader entry = CacheHeader.readHeader(fis);
            entry.size = file.length();
            putEntry(entry.key, entry);
        } catch (IOException e) {
            if (file != null) {
                file.delete();
            }
        } finally {
            try {
                if (fis != null) {
                    fis.close();
                }
            } catch (IOException ignored) {
            }
        }
    }
}
Here we can see another feature that sets Volley apart from other caches: it stores metadata, in other words it defines its own on-disk data format.
Each cache file begins with a Volley header. This not only gives some assurance about data integrity, it is also a clean way to store the entry's metadata.
public static CacheHeader readHeader(InputStream is) throws IOException {
    CacheHeader entry = new CacheHeader();
    int magic = readInt(is);
    if (magic != CACHE_MAGIC) {
        // don't bother deleting, it'll get pruned eventually
        throw new IOException();
    }
    entry.key = readString(is);
    entry.etag = readString(is);
    if (entry.etag.equals("")) {
        entry.etag = null;
    }
    entry.serverDate = readLong(is);
    entry.ttl = readLong(is);
    entry.softTtl = readLong(is);
    entry.responseHeaders = readStringStringMap(is);
    return entry;
}
This code tells us quite a bit: Volley defines its own magic number and reads the metadata according to its own layout.
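Putting readHeader together with how initialize and get use the rest of the file, the layout of one cache file can be reconstructed as follows. This is a summary inferred from the read code above, not a verbatim copy of Volley's writer:

// Layout of one DiskBasedCache file, as implied by CacheHeader.readHeader:
//   int                 CACHE_MAGIC      // magic number identifying a Volley cache file
//   String              key              // the cache key (typically the request URL)
//   String              etag             // HTTP ETag, empty string if none
//   long                serverDate       // Date header reported by the server
//   long                ttl              // hard expiration time
//   long                softTtl          // soft expiration time (refresh threshold)
//   Map<String,String>  responseHeaders  // cached response headers
//   byte[]              data             // the raw response body: everything after the header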
With the cache initialized, we arrive at the core of CacheDispatcher:
while (true) {
    try {
        // Get a request from the cache triage queue, blocking until
        // at least one is available.
        final Request<?> request = mCacheQueue.take();
        request.addMarker("cache-queue-take");

        // If the request has been canceled, don't bother dispatching it.
        if (request.isCanceled()) {
            request.finish("cache-discard-canceled");
            continue;
        }

        // Attempt to retrieve this item from cache.
        Cache.Entry entry = mCache.get(request.getCacheKey());
        if (entry == null) {
            request.addMarker("cache-miss");
            // Cache miss; send off to the network dispatcher.
            mNetworkQueue.put(request);
            continue;
        }

        // If it is completely expired, just send it to the network.
        if (entry.isExpired()) {  // check whether the cached entry has expired
            request.addMarker("cache-hit-expired");
            request.setCacheEntry(entry);
            mNetworkQueue.put(request);
            continue;
        }

        // We have a cache hit; parse its data for delivery back to the request.
        request.addMarker("cache-hit");
        Response<?> response = request.parseNetworkResponse(
                new NetworkResponse(entry.data, entry.responseHeaders));
        request.addMarker("cache-hit-parsed");

        if (!entry.refreshNeeded()) {
            // Completely unexpired cache hit. Just deliver the response.
            mDelivery.postResponse(request, response);
        } else {
            // Soft-expired cache hit. We can deliver the cached response,
            // but we need to also send the request to the network for
            // refreshing.
            request.addMarker("cache-hit-refresh-needed");
            request.setCacheEntry(entry);

            // Mark the response as intermediate.
            response.intermediate = true;

            // Post the intermediate response back to the user and have
            // the delivery then forward the request along to the network.
            mDelivery.postResponse(request, response, new Runnable() {
                @Override
                public void run() {
                    try {
                        mNetworkQueue.put(request);
                    } catch (InterruptedException e) {
                        // Not much we can do about this.
                    }
                }
            });
        }
    } catch (InterruptedException e) {
        // We may have been interrupted because it was time to quit.
        if (mQuit) {
            return;
        }
        continue;
    }
}
The thread polls in a while(true) loop, but since the queue is blocking, take() parks the thread and this does not burn CPU or battery.
Cache.Entry entry = mCache.get(request.getCacheKey()); only when the entry actually exists is the real payload read from disk. This is Volley's lazy loading.
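The lazy part lives in DiskBasedCache.get: initialize() only kept the header metadata in memory, and the data bytes are pulled from disk on demand. A simplified sketch of that method, where CountingInputStream, streamToBytes and toCacheEntry are internal helpers of DiskBasedCache:

@Override
public synchronized Entry get(String key) {
    CacheHeader entry = mEntries.get(key);  // only the header lives in memory
    if (entry == null) {
        return null;                        // cache miss
    }
    File file = getFileForKey(key);
    CountingInputStream cis = null;
    try {
        cis = new CountingInputStream(new FileInputStream(file));
        CacheHeader.readHeader(cis);        // skip past the header we already hold
        byte[] data = streamToBytes(cis, (int) (file.length() - cis.bytesRead));
        return entry.toCacheEntry(data);    // metadata + body -> Cache.Entry
    } catch (IOException e) {
        remove(key);                        // drop the unreadable entry
        return null;
    } finally {
        if (cis != null) {
            try { cis.close(); } catch (IOException ignored) { }
        }
    }
}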
if (entry.isExpired()) {  // check whether the cached entry has expired
    request.addMarker("cache-hit-expired");
    request.setCacheEntry(entry);
    mNetworkQueue.put(request);
    continue;
}
This code decides, based on freshness, whether the cached entry should be discarded. Looking back at the code shown so far, notice how the request is marked with a different state in each context; this is extremely valuable for later maintenance.
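The freshness test itself is trivial: Cache.Entry keeps two timestamps and compares them with the current time, roughly:

/** True if the entry is expired (hard TTL has passed). */
public boolean isExpired() {
    return this.ttl < System.currentTimeMillis();
}

/** True if a refresh is needed from the original data source (soft TTL has passed). */
public boolean refreshNeeded() {
    return this.softTtl < System.currentTimeMillis();
}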
At the same time, to keep the interface uniform, CacheDispatcher disguises its result as a NetworkResponse.
To the outside interface it does not matter which path produced the data; everything is treated as if it came from the network, which is part of the point of having a DAO-style abstraction.
request.addMarker("cache-hit"); Response<?> response = request.parseNetworkResponse( <strong><span style="color:#006600;">new NetworkResponse</span></strong>(entry.data, entry.responseHeaders)); request.addMarker("cache-hit-parsed");
The point of request.parseNetworkResponse is to let your request turn the raw response into its own data object, just as the sketch above does for strings. At this point, as far as the cache path is concerned, the data is fully prepared and all that is left is delivery. As mentioned in the previous installment, the Request is eventually handed to a Delivery object for asynchronous dispatch, which keeps delivery from blocking the worker threads. And since, as just shown, the cache disguises its result as a NetworkResponse before posting, the Network path handles this part in exactly the same way.
So in the next article, on NetworkDispatcher, I will skip these parts. Volley's Delivery implementation is:
com.android.volley.ExecutorDelivery.java
public ExecutorDelivery(final Handler handler) {
    // Make an Executor that just wraps the handler.
    mResponsePoster = new Executor() {
        @Override
        public void execute(Runnable command) {
            handler.post(command);
        }
    };
}

Its constructor takes a Handler. If that Handler belongs to the UI thread, your callbacks run on the UI thread, so you never have to post to the UI thread yourself. Whatever is posted is wrapped into a ResponseDeliveryRunnable command.
That command runs on the thread the Handler belongs to. With this, the basic flow of CacheDispatcher is complete. Besides delivering the result, ResponseDeliveryRunnable also does some wrap-up work, which readers can explore on their own.
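For reference, that wrap-up work lives in ResponseDeliveryRunnable.run; stripped of the marker logging details, it does roughly the following (a simplified sketch, where mRequest, mResponse and mRunnable are the fields the runnable was constructed with):

@Override
public void run() {
    // If this request has been canceled, finish it and don't deliver.
    if (mRequest.isCanceled()) {
        mRequest.finish("canceled-at-delivery");
        return;
    }

    // Deliver a normal response or an error, depending on the result.
    if (mResponse.isSuccess()) {
        mRequest.deliverResponse(mResponse.result);
    } else {
        mRequest.deliverError(mResponse.error);
    }

    // An intermediate (soft-expired) response keeps the request alive;
    // otherwise the request is finished and removed from the queue.
    if (mResponse.intermediate) {
        mRequest.addMarker("intermediate-response");
    } else {
        mRequest.finish("done");
    }

    // Run the post-delivery runnable, if one was provided
    // (this is what re-posts the request to the network queue).
    if (mRunnable != null) {
        mRunnable.run();
    }
}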