I thought I knew read.table/read.delim well, but I still fell into a pit.
I had a dataset of 30,000-odd rows, containing sample expression values plus annotation information. Roughly like this:
The file has over 30,000 rows, but after reading it in I was left with only 10,000-odd, and read.delim and read.table even dropped different numbers of rows. When I opened the file in Excel and re-saved it as txt, the re-read data was back to the normal 30,000+ rows.
MP <- read.delim("combine_test.txt",sep = '\t',header = T)
MP1 <- read.table("combine_test.txt",sep = '\t',header = T)
MP2<- read.delim("new_combine_test.txt",sep = '\t',header = T)
So I started to wonder whether it was an RStudio problem. I tested it on Linux, and things got even stranger.
MP <- read.table("combine_test2.txt",header = T,sep='\t')
dim(MP)
MP2 <- read.delim("combine_test2.txt",header = T,sep='\t')
dim(MP2)
write.table(MP,"out.txt",col.names=T,row.names=F,sep='\t',quote=F)
write.table(MP2,"out2.txt",col.names=T,row.names=F,sep='\t',quote=F)
dim() reported only 10,000-odd rows for both, yet the files written back out had over 30,000 lines!
So I realized it had to be something about the data format. Let's try readr:
library(readr)
MP2 <- as.data.frame(read_delim("combine_test.txt",delim = '\t'))
Back to normal. Could base R really be worse than the tidyverse??? I searched online and finally found the cause: it all comes down to the quote parameter.
MP3 <- read.table("combine_test.txt",sep = '\t',quote = "",header = T)
MP4 <- read.delim("combine_test.txt",sep = '\t',quote = "",header = T)
Regarding the quote parameter, that answer explained it like this:
Explanation: Your data has a single quote on the 59th line (pyridoxamine 5'-phosphate oxidase (predicted)). Then there is another single quote, which complements the one on line 59, on line 137 (5'-hydroxyl-kinase activity...). Everything within the quotes is read as a single field of data, and a quoted field can include newline characters too. That's why you lose the lines in between. quote = "" disables quoting altogether.
Put simply: my data contains single quotes ('), and everything between a pair of single quotes gets read as one field, so I need to pass quote = "" to turn quote handling off entirely. Sure enough, when I checked, the KEGG descriptions in my annotation do contain quote characters.
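The swallowing is easy to reproduce with a toy file (the file name and contents below are made up for illustration, not the real dataset):

```r
# Toy reproduction (hypothetical data): a stray single quote on one line
# pairs with the next single quote in the file, and everything in between
# is swallowed into a single field.
writeLines(c("id\tdesc",
             "1\tpyridoxamine 5'-phosphate oxidase",
             "2\tnormal annotation",
             "3\t5'-hydroxyl kinase activity"),
           "toy.txt")

bad  <- read.table("toy.txt", sep = "\t", header = TRUE)              # default quoting
good <- read.table("toy.txt", sep = "\t", header = TRUE, quote = "")  # quoting disabled

nrow(bad)   # 1 -- the three data rows collapsed into one
nrow(good)  # 3 -- every line is its own row again

# The "missing" rows are still there, hidden inside one field as embedded
# newlines, which is why write.table() printed them back out line by line.
grepl("\n", bad$desc[1], fixed = TRUE)   # TRUE
```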
The same kind of error can occur when a field itself contains a double quote (") or other special characters. To screen for it, use count.fields to count the number of fields on each line; if any NA shows up, the file is not being parsed correctly.
num.fields = count.fields("combine_test.txt", sep="\t")
num.fields = count.fields("combine_test.txt", sep="\t",quote = "")
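On a toy file with an unbalanced quote (hypothetical contents), the NA shows up like this:

```r
# Hypothetical toy file: the single quote on the second line pairs with
# the one on the third line, so the quoted string spans a newline.
writeLines(c("id\tdesc",
             "1\tpyridoxamine 5'-phosphate oxidase",
             "2\t5'-hydroxyl kinase activity"),
           "toy2.txt")

count.fields("toy2.txt", sep = "\t")              # contains NA: a quoted
                                                  # string spans several lines
count.fields("toy2.txt", sep = "\t", quote = "")  # all 2: every line is clean
```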
read.csv apparently does not hit my particular problem, because by default it only treats double quotes, not single quotes, as quoting characters. Either way, read.table can clearly fail in unexpected ways; it is worth getting better acquainted with fread and the readr family.
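The relevant defaults can be inspected directly; the values below are, to my knowledge, the documented defaults in stock R:

```r
# Default quote characters of the base readers: read.table recognizes both
# single and double quotes, read.csv/read.delim only the double quote.
formals(read.table)$quote  # "\"'"
formals(read.csv)$quote    # "\""
formals(read.delim)$quote  # "\""
```

If those defaults hold, the asymmetry would also explain why read.table and read.delim dropped different numbers of rows from the same file: read.table reacts to stray single and double quotes, read.delim only to double quotes.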