Data Preparation and Feature Engineering

1. Perceiving Data

1-1 Data in Files

1.1.1 CSV Files

    pd.read_csv(csv_file, index_col=0)

index_col=0 tells pandas to use the first column of the file as the index

    df_new.to_csv("work/files/ten_bicycle.csv")

Save as a CSV file

1.1.2 Excel Files

    jiangsu = pd.read_excel("/home/aistudio/data/data20465/jiangsu.xls")
    jiangsu.to_excel('work/files/jiangsu.xlsx')
    cpi.drop([11, 12], axis=0, inplace=True)

Drop the rows labeled 11 and 12, modifying the DataFrame in place

    cpi.reset_index(drop=True, inplace=True)

Reset the index, dropping the old one

    cpi.columns.rename('', inplace=True)

Clear the name of the columns Index. Note that this renames the Index object itself, not the column labels; renaming the labels is done with cpi.rename(columns=...)
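A minimal sketch, with made-up column names, of the difference between the two operations:

import pandas as pd

df = pd.DataFrame({"a": [1, 2], "b": [3, 4]})
df.columns.name = "metric"    # the columns Index can carry its own name
df.columns = df.columns.rename("")    # clears that name, as above
df = df.rename(columns={"a": "alpha"})    # renaming the labels themselves is a separate operation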

    for column in cpi.columns[:-1]:
       cpi[column] = pd.to_numeric(cpi[column])
    cpi.dtypes

Convert the columns to numeric types

    ax.boxplot(js['population'], showmeans=True)

Draw a box plot and show the mean

1.1.3 Image Files

    from PIL import Image    
    color_image = Image.open("work/images/laoqi.png")

Reading an image, method 1 (PIL)

    gray_image = Image.open("work/images/laoqi.png").convert("L")

Convert a color image to grayscale

convert() is a method of the image object; it takes a mode argument that specifies a color mode:

1 (1-bit pixels, black and white, stored one pixel per byte)

L (8-bit pixels, grayscale)

P (8-bit pixels, mapped to any other mode via a palette)

RGB (3x8-bit pixels, true color)

RGBA (4x8-bit pixels, true color with a transparency mask)

CMYK (4x8-bit pixels, color separation)

YCbCr (3x8-bit pixels, color video format)

I (32-bit signed integer pixels)

F (32-bit floating point pixels)
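A quick sketch of switching between a few of these modes, reusing the sample image above:

from PIL import Image

img = Image.open("work/images/laoqi.png")
bw = img.convert("1")    # 1-bit black and white
rgb = img.convert("RGB")    # drop the alpha channel of an RGBA image
print(img.mode, bw.mode, rgb.mode)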

    import numpy as np
    color_array = np.array(color_image)
    color_array.shape
Output: (407, 396, 4)

Convert the color image to a NumPy array

    gray_array = np.array(gray_image)
    gray_array.shape
Output: (407, 396)

Convert the grayscale image to a NumPy array

    import cv2    
    img = cv2.imread('work/images/laoqi.png', 0)

Reading an image, method 2 (OpenCV, commonly used); the flag 0 loads it as grayscale

    plt.imshow(img, cmap = 'gray', interpolation = 'bicubic')

Display the image

    from PIL import Image
    Image.fromarray(img)

Convert an array back to a PIL Image

    part_img = img[50:260, 100:280]

Crop the image (rows 50:260, columns 100:280)

    reverse_img = 255 - img    
    Image.fromarray(reverse_img)

Negative image (invert the pixel values)

    part1 = img1[50:260, 100:280]
    part2 = img2[300:, 100:280]
    new_img = np.vstack((part1, part2))

Stitch two images together vertically (np.vstack requires the pieces to have the same width)
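Horizontal stitching works the same way with np.hstack, which instead requires equal heights; a sketch with hypothetical crop coordinates:

import numpy as np

left = img1[50:260, 100:280]    # both crops are 210 rows tall
right = img2[50:260, 300:480]
side_by_side = np.hstack((left, right))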

1-2 Data from Databases (optional)

    import pandas as pd
    import pymysql
    mydb = pymysql.connect(host="localhost",
                           user='root',
                           password='1q2w3e4r5t',
                           db="books",
                          )
# connect to the database
    cursor = mydb.cursor()
    
    path = "/Users/qiwsir/Documents/Codes/DataSet"
    df = pd.read_csv(path + "/jiangsu/cities.csv")
# insert data
sql = ('insert into city (name, area, population, longd, latd) '
       'values ("%s", "%s", "%s", "%s", "%s")')
    for idx in df.index:
        row = df.iloc[idx]
    cursor.execute(sql % (row['name'], row['area'], row['population'], row['longd'], row['latd']))    # execute the SQL
mydb.commit()    # commit the transaction (this does not close the connection)
    sql_count = "SELECT COUNT(1) FROM city"
    cursor.execute(sql_count)
n = cursor.fetchone()    # fetch a single row
    n
    sql_columns = 'SELECT name, area FROM city'
    cursor.execute(sql_columns)
    cursor.fetchall()
    
# query all records, ordered by area from largest to smallest
    sql_sort = "SELECT * FROM city ORDER BY area DESC"
    cursor.execute(sql_sort)
    cursor.fetchall()
# a more concise approach with pandas
    import pandas as pd
    import pymysql
    mydb = pymysql.connect(host="localhost",
                           user='root',
                           password='1q2w3e4r5t',
                           db="books",)
    cities = pd.read_sql_query("Select * FROM city", con=mydb, index_col='id')
    cities

1-3 Data from Web Pages (optional)

1-4 Data from APIs (optional)

2 Data Cleaning

2-0 Basic Concepts

    import pandas as pd
    df = pd.read_csv("/home/aistudio/data/data20505/pm2.csv")
df.sample(10)    # a random sample of 10 rows
df.shape    # (rows, columns)
df.info()    # column dtypes and non-null counts
df.dtypes    # dtype of each column

2-1 Converting Data Types

import pandas as pd
df = pd.DataFrame([{'col1':'a', 'col2':'1'}, 
                   {'col1':'b', 'col2':'2'}])    # built from dicts; df.dtypes shows object
s = pd.Series(['1', '2', '4.7', 'pandas', '10'])    # built from a list
df['col2-int'] = df['col2'].astype(int)    # convert the column to int
s.astype(float, errors='ignore')    # errors='ignore' returns the data unchanged when conversion fails
pd.to_numeric(s, errors='coerce')    # errors='coerce' turns invalid values into NaN
pd.to_datetime(df[['Month', 'Day', 'Year']])    # assemble Month/Day/Year columns into datetimes
# replacing values with a converter function
def convert_money(value):
    new_value = value.replace("$", "").replace(",", "")
    return float(new_value)

df['2016'].apply(convert_money)
# the same idea with a lambda
df['Percent Growth'].apply(lambda x: float(x.replace("%", "")) / 100)
np.where(df['Active']=='Y', 1, 0)    # conditional: output 1 where satisfied, 0 otherwise
bras['creationTime'].str.split().apply(pd.Series)    # split the strings and expand each list into columns
bras2 = bras['productSize'].str.upper()    # convert to uppercase
bras['productColor'].str.findall(r"[\u4E00-\u9FFF]+").str[0]    # regex match: first run of Chinese characters (note the \u escapes)
bras2.str.findall("[a-zA-Z]+").str[0]    # first run of Latin letters

2-2 Handling Duplicate Data

df.duplicated('Age', keep='last')    # keep='last' flags all but the last occurrence; returns a boolean Series for the given column
df.drop_duplicates('Age', keep='last')    # returns a deduplicated copy (use inplace=True to modify in place)
df[df.duplicated()].count() / df.count()    # proportion of duplicated rows
Output: Name     0.142857
Age      0.142857
Score    0.142857
dtype: float64
df.duplicated().any()    # any duplicated rows at all?
Output: True

2-3 Handling Missing Data

hitters.isna().any()    # any missing values per column?
hitters.isnull().sum()    # count of missing values per column
(hitters.shape[0] - hitters.count()) / hitters.shape[0]    # fraction of missing values per column
df.dropna(axis=0, how='all')    # how= sets the drop condition; 'all' drops rows whose values are all missing
df.dropna(thresh=2)    # drop rows with fewer than 2 non-missing values
df['ColA'].fillna(method='bfill')    # fill missing values by propagating the next valid value backwards
pdf2 = persons.sample(20)
pdf2['Height-na'] = np.where(pdf2['Height'] % 5 == 0, np.nan, pdf2['Height'])    # manufacture missing values
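The thresh semantics are easy to misread; a tiny sketch on a made-up frame showing that thresh=2 keeps rows with at least 2 non-missing values:

import numpy as np
import pandas as pd

df_demo = pd.DataFrame({"a": [1, np.nan, np.nan],
                        "b": [2, 5, np.nan],
                        "c": [3, 6, np.nan]})
df_demo.dropna(thresh=2)    # keeps rows 0 and 1, drops the all-NaN row 2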
    
from sklearn.impute import SimpleImputer
imp_mean = SimpleImputer(missing_values=np.nan, strategy='mean')    # replace missing values with the column mean
col_values = imp_mean.fit_transform(pdf2['Height-na'].values.reshape((-1, 1)))
col_values

# replace missing values with a constant
imp = SimpleImputer(missing_values=-1, strategy='constant', fill_value=110)
imp.fit_transform(df['price'].values.reshape((-1, 1)))
# impute missing values from a pattern, method 1: linear regression
df = pd.DataFrame({"one": np.random.randint(1, 100, 10),
                   "two": [2, 4, 6, 8, 10, 12, 14, 16, 18, 20],
                   "three": [5, 9, 13, np.nan, 21, np.nan, 29, 33, 37, 41]})


from sklearn.linear_model import LinearRegression

df_train = df.dropna()    # training set: rows where 'three' is known
df_test = df[df['three'].isnull()]    # rows to impute

regr = LinearRegression()
regr.fit(df_train['two'].values.reshape(-1, 1), df_train['three'].values.reshape(-1, 1))
df_three_pred = regr.predict(df_test['two'].values.reshape(-1, 1))

# write the predictions back into the original frame
df.loc[(df.three.isnull()), 'three'] = df_three_pred
df
    
# impute missing values from a pattern, method 2: KNN
from sklearn.datasets import load_iris    # the iris dataset
import numpy as np

iris = load_iris()
X = iris.data
# manufacture a dataset with missing values
rng = np.random.RandomState(0)
X_missing = X.copy()
mask = np.abs(X[:, 2] - rng.normal(loc=5.5, scale=0.7, size=X.shape[0])) < 0.6
X_missing[mask, 3] = np.nan    # X_missing now contains NaNs

from missingpy import KNNImputer    # KNN-based imputation
imputer = KNNImputer(n_neighbors=3, weights="uniform")
X_imputed = imputer.fit_transform(X_missing)
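Since version 0.22, scikit-learn ships a KNNImputer with the same constructor arguments, so a drop-in alternative is:

from sklearn.impute import KNNImputer

imputer = KNNImputer(n_neighbors=3, weights="uniform")
X_imputed = imputer.fit_transform(X_missing)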

2-4 Handling Outliers
%matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

df = pd.read_csv("/home/aistudio/data/data20510/experiment.csv", index_col=0)

fig, ax = plt.subplots()
ax.scatter(df['alpha'], df['belta'])    # inspect outliers with a scatter plot
sns.boxplot(x="day", y="tip", data=tips, palette="Set3")    # inspect outliers with a box plot (tips is seaborn's sample dataset)
# combine a box plot with a swarm plot
ax = sns.boxplot(x="day", y="tip", data=tips)
ax = sns.swarmplot(x="day", y="tip", data=tips, color=".25")
# remove outliers with the box-plot (IQR) rule; boston_df is the Boston housing data loaded earlier
percentlier = boston_df.quantile([0, 0.25, 0.5, 0.75, 1], axis=0)
IQR = percentlier.iloc[3] - percentlier.iloc[1]    # the height of the box
Q1 = percentlier.iloc[1]    # lower quartile
Q3 = percentlier.iloc[3]    # upper quartile
(boston_df < (Q1 - 1.5 * IQR)).any()    # any values below the lower fence?
(boston_df > (Q3 + 1.5 * IQR)).any()    # any values above the upper fence?
boston_df_out = boston_df[~((boston_df < (Q1 - 1.5 * IQR)) | (boston_df > (Q3 + 1.5 * IQR))).any(axis=1)]    # drop rows containing any outlier
    boston_df_out.shape

In statistics, quartiles are the three cut points that divide a sorted dataset into four equal parts.

The first quartile (Q1), also called the lower quartile, is the value 25% of the way through the sorted sample.

The second quartile (Q2) is the median, the value at the 50% point.

The third quartile (Q3), also called the upper quartile, is the value at the 75% point.

The difference between the third and first quartiles is the interquartile range (IQR).

First, locate the quartile positions. One common convention is based on (n+1):

Q1 position = (n+1) × 0.25

Q2 position = (n+1) × 0.5

Q3 position = (n+1) × 0.75

where n is the number of observations.

There are different conventions for determining quartiles; another is based on (n-1):

Q1 position = (n-1) × 0.25

Q2 position = (n-1) × 0.5

Q3 position = (n-1) × 0.75
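NumPy's percentile function interpolates over (n-1)-based positions by default, which makes that second convention easy to verify:

import numpy as np

data = np.array([1, 3, 5, 7, 9, 11, 13, 15])    # n = 8
# Q1 position under the (n-1) rule: (8-1) x 0.25 = 1.75,
# i.e. 75% of the way from data[1]=3 to data[2]=5
np.percentile(data, 25)    # 4.5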

# remove outliers assuming a normal distribution (z-scores)
# compute the z-scores
from scipy import stats    # statistics module
import numpy as np
rm = boston_df['RM']
z = np.abs(stats.zscore(rm))
st = boston_df['RM'].std()
st

threshold = 3 * st    # the cutoff used here; note z-scores are already in units of the std, so the textbook rule is simply z > 3
print(np.where(z > threshold))    # indices of the outliers
Output: (array([ 97,  98, 162, 163, 166, 180, 186, 195, 203, 204, 224, 225, 226,
       232, 233, 253, 257, 262, 267, 280, 283, 364, 365, 367, 374, 384,
       386, 406, 412, 414]),)

rm_in = rm[(z < threshold)]    # keep only the non-outliers
rm_in.shape
Output: (476,)

3 Feature Transformation

3-1 Numericalizing Features

df.replace({"N": 0, 'Y': 1})    # direct replacement
from sklearn.preprocessing import LabelEncoder    # automatic encoding
le = LabelEncoder()
le.fit_transform(df['hypertension'])

le.inverse_transform([0, 1, 1, 2, 1, 0])    # map encoded labels back to the original values
import re    # encoding text by word counts
d1 = "I am Laoqi. I am a programmer."
d2 = "Laoqi is in Soochow. It is a beautiful city."
words = re.findall(r"\w+", d1+d2)    # extract words with a regex rather than split(), which avoids trailing-period problems

words = list(set(words))    # unique words as a list
words = [w.lower() for w in words]    # lowercase them
words
    
# count how many times each unique word appears in each document
    def count_word(document, unique_words):
        count_doc = []
        for word in unique_words:
            n = document.lower().count(word)
            count_doc.append(n)
        return count_doc
    
    count1 = count_word(d1, words)
    count2 = count_word(d2, words)
    print(count1)
    print(count2)
    
# save as a DataFrame
    df = pd.DataFrame([count1, count2], columns=words, index=['d1', 'd2'])
    df
from sklearn.feature_extraction.text import CountVectorizer    # word counts with scikit-learn's built-in vectorizer
count_vect = CountVectorizer()
tf1 = count_vect.fit_transform([d1, d2])
tf1.shape
Output: (2, 9)

count_vect.get_feature_names()    # two fewer than the hand-rolled version: the default token pattern drops single-character tokens such as "I" and "a"
Output: ['am', 'beautiful', 'city', 'in', 'is', 'it', 'laoqi', 'programmer', 'soochow']

tf1.toarray()    # show the counts
Output: array([[2, 0, 0, 0, 0, 0, 1, 1, 0],
       [0, 1, 1, 1, 2, 1, 1, 0, 1]])
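To recover the same labeled table as the hand-rolled version, the counts and feature names combine directly (a short sketch reusing tf1 and count_vect from above):

import pandas as pd

df_tf = pd.DataFrame(tf1.toarray(),
                     columns=count_vect.get_feature_names(),
                     index=['d1', 'd2'])
df_tf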

3-2 Binarizing Features

# Binarization turns numeric values into 0/1 according to a threshold, which is configurable. It only works on numeric data, and the input must be a 2D array of shape (m, n), not a Series of shape (n,).
    from sklearn.preprocessing import Binarizer
bn = Binarizer(threshold=pm25["Exposed days"].mean())    # threshold: the column mean
result = bn.fit_transform(pm25[["Exposed days"]])    # returns a 2D array of 0/1
    pm25['sk-bdays'] = result
    pm25.sample(10)
from sklearn.preprocessing import binarize    # the functional equivalent
fbin = binarize(pm25[['Exposed days']], threshold=pm25['Exposed days'].mean())
fbin[[1, 50, 100, 150, 200]]    # spot-check a few rows

Image examples (omitted)

3-3 One-Hot Encoding

pd.get_dummies(g)    # pandas' built-in one-hot encoding
persons.merge(df_dum, left_index=True, right_index=True)    # join the dummies back onto the original frame
from sklearn.preprocessing import OneHotEncoder
ohe = OneHotEncoder()
fs = ohe.fit_transform(df[['color']])
fs_ohe = pd.DataFrame(fs.toarray()[:, 1:], columns=["color_green", 'color_red'])    # drop the first dummy column (blue) as redundant
df = pd.concat([df, fs_ohe], axis=1)
df
Output:
       color  size  price classlabel  color_green  color_red
    0  green     1   29.9     class1          1.0        0.0
    1    red     2   69.9     class2          0.0        1.0
    2   blue     3   99.9     class1          0.0        0.0
    3    red     2   59.9     class1          0.0        1.0
    from sklearn.preprocessing import LabelEncoder
    from sklearn.preprocessing import OneHotEncoder
    import numpy as np
    encoded_x = None
    for i in range(0, X.shape[1]):
    label_encoder = LabelEncoder()    # integer-encode the i-th column
        feature = label_encoder.fit_transform(X[:,i])
        feature = feature.reshape(X.shape[0], 1)
    onehot_encoder = OneHotEncoder(sparse=False)    # then one-hot encode it
        feature = onehot_encoder.fit_transform(feature)
        if encoded_x is None:
            encoded_x = feature
        else:
            encoded_x = np.concatenate((encoded_x, feature), axis=1)
    print("X shape: : ", encoded_x.shape)

3-4 Data Transformation

# common ways to transform non-normally distributed data toward a normal distribution
data['logtime'] = np.log10(data['time'])    # method 1: log transform

from scipy import stats
dft = stats.boxcox(transform)[0]    # method 2: Box-Cox via scipy (transform is a positive-valued series from the original context)

from sklearn.preprocessing import power_transform
dft2 = power_transform(dc_data[['AIR_TIME']], method='box-cox')    # method 3: Box-Cox via scikit-learn
# constructing features with sklearn.preprocessing.PolynomialFeatures
from sklearn.preprocessing import PolynomialFeatures
poly = PolynomialFeatures(2)    # degree-2 polynomial features
poly.fit_transform(X)
Original data:
array([[0, 1],
       [2, 3],
       [4, 5]])
After feature construction (the columns are 1, a, b, a², ab, b²):
array([[ 1.,  0.,  1.,  0.,  0.,  1.],
       [ 1.,  2.,  3.,  4.,  6.,  9.],
       [ 1.,  4.,  5., 16., 20., 25.]])
# power_transform maps data from an arbitrary distribution toward a Gaussian, stabilizing variance and minimizing skewness
hbcs = plt.hist(dft2, bins=100)    # inspect the transformed distribution
# To simplify chaining transforms and models, scikit-learn provides Pipeline, which merges several processing steps into a single estimator
    %matplotlib inline
    import pandas as pd
    import matplotlib.pyplot as plt
    
    from sklearn.linear_model import Ridge
    from sklearn.preprocessing import PolynomialFeatures
    from sklearn.pipeline import make_pipeline
    
    df = pd.read_csv("/home/aistudio/data/data20514/xsin.csv")
    colors = ['teal', 'yellowgreen', 'gold']
    plt.scatter(df['x'], df['y'], color='navy', s=30, marker='o', label="training points")
    
    for count, degree in enumerate([3, 4, 5]):
    model = make_pipeline(PolynomialFeatures(degree), Ridge())    # polynomial features feeding a ridge regression
        model.fit(df[['x']], df[['y']])
        y_pre = model.predict(df[['x']])
        plt.plot(df['x'], y_pre, color=colors[count], linewidth=2,
                 label="degree %d" % degree)
    
    plt.legend()

3-5 Discretizing Features

# unsupervised discretization: equal-width binning
pd.cut(ages['years'], 3)    # optional args: bins=[9, 30, 50], labels=[0, 1, 2]
Output:
0    (9.943, 29.0]
1    (9.943, 29.0]
2     (29.0, 48.0]
3     (48.0, 67.0]
4     (48.0, 67.0]
5     (29.0, 48.0]
6     (29.0, 48.0]
Name: years, dtype: category
Categories (3, interval[float64]): [(9.943, 29.0] < (29.0, 48.0] < (48.0, 67.0]]    # three equal-width intervals
pd.qcut(ages['years'], 3)    # like cut, but bins by quantiles (equal frequency) rather than equal width
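The difference shows up on skewed data: cut makes the bin widths equal, while qcut makes the bin populations equal. A tiny sketch on made-up ages:

import pandas as pd

years = pd.Series([10, 12, 13, 14, 15, 60])
pd.cut(years, 2).value_counts()    # equal width: 5 values land in the first bin, 1 in the second
pd.qcut(years, 2).value_counts()    # equal frequency: 3 values in each bin
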
# unsupervised discretization 2: KBinsDiscretizer
from sklearn.preprocessing import KBinsDiscretizer
kbd = KBinsDiscretizer(n_bins=3, encode='ordinal', strategy='uniform')    # n_bins: number of bins; encode='ordinal': integer codes; strategy='uniform': equal-width bins
trans = kbd.fit_transform(ages[['years']])
ages['kbd'] = trans[:, 0]
    ages
# supervised discretization: entropy-based binning
import entropy_based_binning as ebb
A = np.array([[1,1,2,3,3], [1,1,0,1,0]])
ebb.bin_array(A, nbins=2, axis=1)
Output: array([[0, 0, 1, 1, 1],
       [1, 1, 0, 1, 0]])
# supervised discretization 2: MDLP (minimum description length principle)
    from mdlp.discretization import MDLP
    from sklearn.datasets import load_iris
    transformer = MDLP()
    iris = load_iris()
    X, y = iris.data, iris.target
    X_disc = transformer.fit_transform(X, y)
    X_disc

3-6 Data Normalization
from sklearn import datasets
from sklearn.preprocessing import StandardScaler    # standardization: zero mean, unit variance
iris = datasets.load_iris()
iris_std = StandardScaler().fit_transform(iris.data)
from sklearn.preprocessing import MinMaxScaler    # min-max scaling to a fixed interval
iris_mm = MinMaxScaler().fit_transform(iris.data)
iris_mm[:5]
from sklearn.preprocessing import RobustScaler, MinMaxScaler    # RobustScaler standardizes with outlier-robust statistics (the median and the IQR) rather than the mean and standard deviation
robust = RobustScaler()
robust_scaled = robust.fit_transform(X)
robust_scaled = pd.DataFrame(robust_scaled, columns=['x1', 'x2'])
from sklearn.preprocessing import Normalizer    # normalize each sample to unit norm; norm='l1' and norm='max' are alternatives
norma = Normalizer()
norma.fit_transform([[3, 4]])
Output: array([[0.6, 0.8]])
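Unlike the scalers above, which work column by column, Normalizer rescales each row (sample) to unit norm; a quick check:

import numpy as np
from sklearn.preprocessing import Normalizer

X_demo = np.array([[3.0, 4.0], [1.0, 0.0]])
Normalizer(norm='l2').fit_transform(X_demo)    # each row divided by its own L2 norm
# array([[0.6, 0.8],
#        [1. , 0. ]])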

4 Feature Selection

4-0 Feature Selection Overview

from sklearn.model_selection import train_test_split    # split the dataset
from sklearn.preprocessing import StandardScaler
X, y = df_wine.iloc[:, 1:], df_wine.iloc[:, 0].values
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)

std = StandardScaler()
X_train_std = std.fit_transform(X_train)
X_test_std = std.transform(X_test)    # reuse the training-set statistics; do not refit on the test set

4-1 Wrapper Methods

# sequential feature selection
    from mlxtend.feature_selection import SequentialFeatureSelector as SFS
    X_train, X_test, y_train, y_test= train_test_split(X, y, 
                                                       stratify=y,
                                                       test_size=0.3,
                                                       random_state=1)
    std = StandardScaler()
    X_train_std = std.fit_transform(X_train)
    
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier(n_neighbors=3)    # base estimator
sfs = SFS(estimator=knn,    # forward selection: grow the feature set until k_features are chosen
               k_features=4,
               forward=True, 
               floating=False, 
               verbose=2,
               scoring='accuracy',
               cv=0)
    sfs.fit(X_train_std, y_train)
# exhaustive feature selection
    from mlxtend.feature_selection import ExhaustiveFeatureSelector as EFS
    efs = EFS(RandomForestRegressor(),min_features=1,max_features=5,scoring='r2',n_jobs=-1)    
    efs.fit(np.array(mini_data),y_train)
    mini_data.columns[list(efs.best_idx_)]
# exhaustive feature selection 2
    from mlxtend.feature_selection import ExhaustiveFeatureSelector  
    from sklearn.ensemble import RandomForestRegressor, RandomForestClassifier  
    from sklearn.metrics import roc_auc_score
    
    feature_selector = ExhaustiveFeatureSelector(RandomForestClassifier(n_jobs=-1),    
               min_features=2,
               max_features=4,
               scoring='roc_auc',
               print_progress=True,
               cv=2)
    features = feature_selector.fit(np.array(train_features.fillna(0)), train_labels)  
    filtered_features= train_features.columns[list(features.best_idx_)]  
    filtered_features
# recursive feature elimination
from sklearn.feature_selection import RFE
rfe = RFE(RandomForestRegressor(), n_features_to_select=5)    # drop the weakest features until 5 remain
rfe.fit(np.array(mini_data), y_train)
rfe.ranking_    # 1 marks the selected features
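Mapping the ranking back to column names is a one-liner (reusing mini_data from above):

selected = mini_data.columns[rfe.support_]    # the features with ranking_ == 1
selected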

4-2 Filter Methods

# method 1: univariate selection with the chi-squared statistic
from sklearn.datasets import load_iris
from sklearn.feature_selection import SelectKBest    # select the k highest-scoring features
from sklearn.feature_selection import chi2
iris = load_iris()
X, y = iris.data, iris.target
skb = SelectKBest(chi2, k=2)    # keep the 2 best features by chi2 score
result = skb.fit(X, y)    # fitting computes the scores
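The fitted selector exposes the per-feature scores and the chosen mask; a short usage sketch:

result.scores_    # chi2 score of each of the four iris features
result.get_support()    # boolean mask of the two selected features
skb.transform(X)    # the reduced feature matrix, shape (150, 2)
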
# method 2: variance threshold
from sklearn.feature_selection import VarianceThreshold
vt = VarianceThreshold(threshold=(0.8 * (1 - 0.8)))    # drop features with less variance than a Bernoulli variable with p = 0.8
vt.fit_transform(X)
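The threshold 0.8 * (1 - 0.8) is the variance of a Bernoulli feature that takes one value 80% of the time (Var = p(1-p)), so boolean features that are more constant than that get dropped; a small sketch:

import numpy as np
from sklearn.feature_selection import VarianceThreshold

X_bool = np.array([[0, 1], [0, 0], [0, 1], [0, 1], [0, 0], [1, 1]])    # column 0 is almost constant
vt_demo = VarianceThreshold(threshold=0.8 * (1 - 0.8))
vt_demo.fit_transform(X_bool)    # only the second column survives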

4-3 Embedded Methods

# feature selection with an embedded method
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression    # an L1-regularized logistic regression model

embeded_lr_selector = SelectFromModel(LogisticRegression(penalty="l1", solver="liblinear"),
                                      threshold='1.25*median')
embeded_lr_selector.fit(X_norm, y)
    
    embeded_lr_support = embeded_lr_selector.get_support()
    embeded_lr_feature = X.loc[:,embeded_lr_support].columns.tolist()
    print(str(len(embeded_lr_feature)), 'selected features')

Worth working through an example to see this in practice.

5 Feature Extraction

5-1 Unsupervised Feature Extraction

# principal component analysis
from sklearn.decomposition import PCA
import numpy as np
pca = PCA()    # keep all components by default
X_pca = pca.fit_transform(X)    # project X onto the principal components
np.round(X_pca[: 4], 2)    # first four transformed samples, rounded
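How many components are worth keeping is usually read off the explained variance; a short check on the fitted model:

pca.explained_variance_ratio_    # fraction of the total variance captured by each component
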
# factor analysis
    from sklearn.decomposition import FactorAnalysis
    fa = FactorAnalysis(n_components=2)
    iris_two = fa.fit_transform(iris.data)
    iris_two[: 4]

5-2 Supervised Feature Extraction

from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
lda = LinearDiscriminantAnalysis(n_components=2)
X_lda = lda.fit_transform(X, y)    # unlike PCA, LDA uses the labels y
plt.scatter(X_lda[:, 0], X_lda[:, 1], c=y)
Original article: https://www.cnblogs.com/shijingwen/p/13700468.html