Today, while working on a Python MapReduce job under Hadoop Streaming, I found the reducer failing because it had hit its memory limit. Looking through the code, I noticed a collection holding URLs. URLs are fairly long, so storing them raw really does waste memory, which suggested compressing them for storage and decompressing on use: processing time goes up, but memory consumption drops dramatically!
Concretely, this means using the zlib module:
import zlib

raw_data = "hello,world,ooooooooooooxxxxxxxxxxx"

# Compress the string and compare sizes.
zb_data = zlib.compress(raw_data)
print "len(raw_data)=%d, len(zb_data)=%d, compression ratio=%.2f" % (
    len(raw_data), len(zb_data), float(len(zb_data)) / len(raw_data))
# len(raw_data)=35, len(zb_data)=25, compression ratio=0.71

# Decompressing restores the original string.
raw_data2 = zlib.decompress(zb_data)
print raw_data2
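Applied to the reducer scenario above, the pattern looks roughly like this. This is a minimal sketch: the urls list and the access loop are made-up illustrations, not the original job's code.

import zlib

# Hypothetical data: long, somewhat repetitive URLs such as the reducer accumulates.
urls = ["http://example.com/some/very/long/path/segment/segment/page-%06d.html?utm_source=newsletter&utm_medium=email" % i
        for i in range(1000)]

# Store each URL compressed: pay some CPU on insert...
compressed_urls = [zlib.compress(u) for u in urls]

# ...and again on each access, in exchange for a smaller in-memory footprint.
for item in compressed_urls:
    url = zlib.decompress(item)  # decompress only when the URL is needed

print "raw bytes: %d, compressed bytes: %d" % (
    sum(len(u) for u in urls), sum(len(c) for c in compressed_urls))

Note that zlib adds a few bytes of fixed overhead (header plus checksum), so very short strings may barely shrink, or even grow; the saving is biggest for long, repetitive strings like URLs sharing common prefixes and query parameters.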
However, if network transfer is involved, the approach above can break. For example, I ran a MapReduce job that compressed in the mapper and decompressed in the reducer, and it failed with:
Traceback (most recent call last):
  File "/hadoop/yarn/local/usercache/lming_08/appcache/application_1415110953023_46173/container_1415110953023_46173_01_000018/./build_visitor_company_ulti_info_red.py", line 25, in <module>
    urllist += zlib.decompress(urlitem) + ""
zlib.error: Error -3 while decompressing data: incorrect header check
log4j:WARN No appenders could be found for logger (org.apache.hadoop.hdfs.DFSClient).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.

I haven't found an effective way around this yet.
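One plausible explanation, which I haven't confirmed for this job: zlib output is raw binary and can contain bytes like '\n' and '\t', which Hadoop Streaming's line-oriented text protocol treats as record and field separators, so the compressed value gets mangled in transit and no longer starts with a valid zlib header. Under that assumption, a common workaround is to base64-encode the compressed bytes in the mapper and decode them before decompressing in the reducer; encode_url and decode_url below are hypothetical helpers, not part of the original job:

import base64
import zlib

# Mapper side: compress, then base64-encode, so the emitted value is
# newline-free ASCII that survives Streaming's text protocol.
def encode_url(url):
    return base64.b64encode(zlib.compress(url))

# Reducer side: reverse the two steps before using the URL.
def decode_url(value):
    return zlib.decompress(base64.b64decode(value))

print decode_url(encode_url("hello,world,ooooooooooooxxxxxxxxxxx"))

Base64 inflates the payload by about a third, which eats into the compression gain, but the result is typically still smaller than the raw URL and, more importantly, safe to ship between mapper and reducer.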