  • 16QAM Demodulation Based on Deep Learning

    1 Deep Learning Demodulation Approach

    • M-QAM demodulation can be cast as a multi-class classification problem: 16QAM demodulation, for example, becomes a 16-class classification problem. Each received signal sample is fed to a neural network, and the network classifies it as one of the M modulation symbols, thereby demodulating the M-QAM signal.
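As a minimal illustration of this framing (the score vector below is made up, not the output of the trained network), demodulating one received sample reduces to picking the class with the highest predicted probability:

```python
import numpy as np

# Hypothetical softmax output of a 16-class demodulator for one received sample
scores = np.full(16, 0.1 / 15)   # small probability spread over 15 wrong classes
scores[13] = 0.9                 # most of the mass on class 13

# Demodulation = pick the most likely of the 16 symbol classes
symbol_index = int(np.argmax(scores))
```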

    • Since 16QAM demodulation is not a complex problem, a four-layer fully connected neural network with a softmax output is used to demodulate QAM over an AWGN channel. The input layer and the two hidden layers use the ReLU activation function, the layer sizes are 2, 40, 80, and 16 nodes, and the output layer uses the softmax activation function.

    • For this multi-class QAM demodulation problem, the loss function is the cross-entropy loss commonly used in classification.

    Since a neural network only accepts real-valued inputs, the received complex signal is split into its real and imaginary parts before being fed into the network, which is why the input layer has dimension 2.
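For example, a batch of complex samples can be turned into the two-column real-valued network input like this (the simulation code in Section 3 does the equivalent with reshape and concatenate):

```python
import numpy as np

received = np.array([-3+3j, 1-1j, 3+1j])   # complex received samples
# One [real, imag] row per sample -- the 2-dimensional input the network expects
features = np.stack([received.real, received.imag], axis=-1)
```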

    2 Simulation Results

    First, 10000 random symbols are generated, passed through an AWGN channel, and demodulated with the traditional minimum-distance method. The simulated symbol error rate, averaged over the runs, is compared with the theoretical 16QAM symbol error rate; the two curves essentially coincide, as shown in the figure below:
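For reference, the theoretical curve is the standard square M-QAM symbol error rate approximation over AWGN, the same expression the code in Section 3.1 evaluates:

```latex
P_s \approx 4\left(1-\frac{1}{\sqrt{M}}\right)
Q\!\left(\sqrt{\frac{3\log_2 M}{M-1}\cdot\frac{E_b}{N_0}}\right),
\qquad
Q(x)=\frac{1}{2}\operatorname{erfc}\!\left(\frac{x}{\sqrt{2}}\right)
```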

    For the deep learning method, 5000 random symbols are first generated and passed through the AWGN channel at each SNR; the resulting received signals form the neural network's training set. The 10000 symbols used by the traditional method serve as the test set to evaluate the trained network. The performance of the two methods is compared in the figure below:
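The noise added at each SNR point follows directly from the Eb/N0 definition; a minimal sketch (the function name and example values are my own, mirroring the noise construction used in the code below):

```python
import numpy as np

def awgn(signal, snr_db, eb):
    """Add complex white Gaussian noise for a target Eb/N0 given in dB."""
    n0 = eb / (10 ** (snr_db / 10.0))        # noise power spectral density
    # Independent real and imaginary noise components, each with variance n0/2
    noise = np.sqrt(n0 / 2) * (np.random.randn(*signal.shape)
                               + 1j * np.random.randn(*signal.shape))
    return signal + noise

np.random.seed(0)
rx = awgn(np.array([1 + 1j, -3 + 3j]), snr_db=10, eb=2.5)
```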

    3 Simulation Code

    The experiments were run on Python 3.7.9 and TensorFlow 2.0.0; the full source code is available on GitHub.

    3.1 Traditional Method

    #%%
    import numpy as np
    from matplotlib import pyplot as plt
    import matplotlib.ticker as ticker
    import scipy.io as sio
    #%%
    nSymbol = 10000
    cpNum = nSymbol // 4
    TotalSymbol = nSymbol + cpNum
    SNR = list(range(0, 20))
    M = 16
    
    # Generate transmit data
    data_scource = np.random.randint(0, 2, [nSymbol, 4])
    
    # Map bits to 16QAM symbols (Gray-coded constellation)
    mapBitToSymbol = {
    	(0, 0, 0, 0) : [-3+3*1j, 0],
    	(0, 0, 0, 1) : [-3+1*1j, 1],
    	(0, 0, 1, 0) : [-3-3*1j, 2],
    	(0, 0, 1, 1) : [-3-1*1j, 3],
    	(0, 1, 0, 0) : [-1+3*1j, 4],
    	(0, 1, 0, 1) : [-1+1*1j, 5],
    	(0, 1, 1, 0) : [-1-3*1j, 6],
    	(0, 1, 1, 1) : [-1-1*1j, 7],
    	(1, 0, 0, 0) : [3+3*1j, 8],
    	(1, 0, 0, 1) : [3+1*1j, 9],
    	(1, 0, 1, 0) : [3-3*1j, 10],
    	(1, 0, 1, 1) : [3-1*1j, 11],
    	(1, 1, 0, 0) : [1+3*1j, 12],
    	(1, 1, 0, 1) : [1+1*1j, 13],
    	(1, 1, 1, 0) : [1-3*1j, 14],
    	(1, 1, 1, 1) : [1-1*1j, 15],
    }
    
    data_send = []
    data_send_index = []
    for i in range(nSymbol):
    	data_send.append(mapBitToSymbol[tuple(data_scource[i])][0])        # modulated symbol
    	data_send_index.append(mapBitToSymbol[tuple(data_scource[i])][1])  # symbol index
    
    data_ifft = np.fft.ifft(data_send)		# transform to time domain (OFDM via IFFT)
    data_ofdm_send = np.hstack([data_ifft[-cpNum:], data_ifft])   # prepend cyclic prefix
    #%%
    Es = np.linalg.norm(data_ofdm_send) ** 2 / TotalSymbol	# energy per symbol
    Eb = Es / np.log2(M)					# energy per bit
    Pe_simu = []
    test_data = []
    test_label = data_send_index
    
    for snrdB in SNR:
        snr = 10 ** (snrdB / 10.0)
        sigma = Eb / snr
        noise = (np.sqrt(sigma/2) * np.random.randn(1, TotalSymbol)
                 + np.sqrt(sigma/2) * np.random.randn(1, TotalSymbol) * 1j)
        data_ofdm_receive = data_ofdm_send + noise
        data_fft = data_ofdm_receive[0, cpNum : cpNum+nSymbol]
        data_receive = np.fft.fft(data_fft)
        test_data.append(np.concatenate((np.real(data_receive).reshape(-1, 1),
                                         np.imag(data_receive).reshape(-1, 1)), axis=-1))
    
        TotalErrorSymbol = 0
        #data_receive_index = []
    
        for i in range(len(data_receive)):
            min_index = 0
            min_value = np.linalg.norm(mapBitToSymbol[(0, 0, 0, 0)][0] - data_receive[i])
            
            for bit, (symbol, index) in mapBitToSymbol.items():
                error_value = np.linalg.norm(symbol - data_receive[i])
                if error_value < min_value:
                    min_index = index
                    min_value = error_value
            
            # Count symbol errors
            if min_index != data_send_index[i]:
                TotalErrorSymbol += 1
        #print(TotalErrorSymbol)
        Pe_simu.append(TotalErrorSymbol / nSymbol)
    
    #%%
    import math
    def Q(x):
        return math.erfc(x / math.sqrt(2)) / 2
    a = 4 * (1 - 1/math.sqrt(M)) / math.log2(M)
    k = math.log2(M)            # bits per symbol
    b = 3 * k / (M-1)
    Pe_theory = []
    
    # Theoretical SER: 4*(1 - 1/sqrt(M)) * Q(sqrt(3*k/(M-1) * Eb/N0))
    for snrdB in range(20):
        Pe_theory.append(a * Q(math.sqrt(b * 10**(snrdB/10))) * math.log2(M))
    
    # Plot
    snrdB = list(range(0, 20))
    plt.semilogy(snrdB, Pe_theory, 'r-.*')
    plt.semilogy(snrdB, Pe_simu, 'k-^')
    
    plt.grid(True, which='major')
    plt.grid(True, which='minor', linestyle='--')
    plt.xlabel('SNR(dB)')
    plt.ylabel('Symbol Error Rate')
    plt.legend(['Theory', 'Simulation'])
    plt.axis([0, 18, 10**-3, 10**0])
    plt.savefig('Theory_Tra_Compare.svg')
    plt.savefig('Theory_Tra_Compare.pdf')
    #%%
    # Save data for reuse as the deep learning test set
    sio.savemat('./Traditional_data.mat', 
                {'test_data':test_data,
                 'test_label':test_label,
                 'Pe_simu':Pe_simu}
                )
    

    3.2 Deep Learning Method

    #%%
    import numpy as np
    import tensorflow as tf
    from tensorflow.keras import models, layers, Input
    from tensorflow import keras
    import os
    os.environ["KMP_DUPLICATE_LIB_OK"] = "TRUE"
    import matplotlib.pyplot as plt
    import scipy.io as sio
    #%%
    nSymbol = 5000						# number of training symbols
    cpNum = nSymbol // 4				        # cyclic prefix length
    TotalSymbol = nSymbol + cpNum
    M = 16						        # modulation order
    SNR = list(range(0, 20))		                # SNR range in dB
    batch_size = 256					
    epochs = 20
    #%%
    # Generate transmit data
    data_scource = np.random.randint(0, 2, [nSymbol, 4])
    
    # Map bits to 16QAM symbols (Gray-coded constellation)
    mapBitToSymbol = {
    	(0, 0, 0, 0) : [-3+3*1j, 0],
    	(0, 0, 0, 1) : [-3+1*1j, 1],
    	(0, 0, 1, 0) : [-3-3*1j, 2],
    	(0, 0, 1, 1) : [-3-1*1j, 3],
    	(0, 1, 0, 0) : [-1+3*1j, 4],
    	(0, 1, 0, 1) : [-1+1*1j, 5],
    	(0, 1, 1, 0) : [-1-3*1j, 6],
    	(0, 1, 1, 1) : [-1-1*1j, 7],
    	(1, 0, 0, 0) : [3+3*1j, 8],
    	(1, 0, 0, 1) : [3+1*1j, 9],
    	(1, 0, 1, 0) : [3-3*1j, 10],
    	(1, 0, 1, 1) : [3-1*1j, 11],
    	(1, 1, 0, 0) : [1+3*1j, 12],
    	(1, 1, 0, 1) : [1+1*1j, 13],
    	(1, 1, 1, 0) : [1-3*1j, 14],
    	(1, 1, 1, 1) : [1-1*1j, 15],
    }
    
    data_send = []
    data_send_index = []
    for i in range(nSymbol):
    	data_send.append(mapBitToSymbol[tuple(data_scource[i])][0])        # modulated symbol
    	data_send_index.append(mapBitToSymbol[tuple(data_scource[i])][1])  # symbol index
    
    data_ifft = np.fft.ifft(data_send)		# transform to time domain (OFDM via IFFT)
    data_ofdm_send = np.hstack([data_ifft[-cpNum:], data_ifft])   # prepend cyclic prefix
    
    #%% 
    # Build the training set from multiple SNRs
    train_label_index = data_send_index * len(SNR)   # label indices, repeated once per SNR
    train_data_list = []
    #%%
    Es = np.linalg.norm(data_ofdm_send) ** 2 / TotalSymbol	# energy per symbol
    Eb = Es / np.log2(M)					# energy per bit
    Pe_simu = []
    
    # Append the received signal at each SNR to the training set
    for snrdB in SNR:
    	snr = 10 ** (snrdB / 10.0)
    	sigma = Eb / snr
    	noise = (np.sqrt(sigma/2) * np.random.randn(1, TotalSymbol)
    	         + np.sqrt(sigma/2) * np.random.randn(1, TotalSymbol) * 1j)
    	data_ofdm_receive = data_ofdm_send + noise
    	data_fft = data_ofdm_receive[0, cpNum : cpNum+nSymbol]
    	data_receive = np.fft.fft(data_fft)
    	train_data_list += list(data_receive)
    #%%
    # ================
    # Build the network
    # ================
    merged_inputs = Input(shape=(2,))
    temp = layers.Dense(40, activation='relu')(merged_inputs)
    temp = layers.BatchNormalization()(temp)
    temp = layers.Dense(80, activation='relu')(temp)
    temp = layers.BatchNormalization()(temp)
    out = layers.Dense(M, activation='softmax')(temp)
    model = models.Model(inputs=merged_inputs, outputs=out)
    model.compile(loss='categorical_crossentropy',
                  optimizer=keras.optimizers.Adam(learning_rate=0.001),
                  metrics=['accuracy'])
    model.summary()
    
    #%%
    train_label_tf = tf.one_hot(train_label_index, depth=M)      # convert labels to one-hot
    train_data_tmp = np.array(train_data_list)
    data_real = np.real(train_data_tmp).reshape(-1, 1)           # split into a real column
    data_imag = np.imag(train_data_tmp).reshape(-1, 1)           # and an imaginary column
    
    train_data = np.concatenate((data_real, data_imag), axis=-1)
    train_label = np.array(train_label_tf)
    
    # Shuffle data and labels with the same permutation so they stay aligned
    state = np.random.get_state()
    np.random.shuffle(train_data)
    np.random.set_state(state)
    np.random.shuffle(train_label)
    
    history = model.fit(train_data, 
                        train_label, 
                        epochs=epochs, 
                        batch_size=batch_size, 
                        verbose=1, 
                        shuffle=True,
                        # hold out part of the training data for validation
                        validation_split=0.2,
                        #callbacks=[checkpointer,early_stopping,reduce_lr]
    					)
    
    #%%
    loss = history.history['loss'] 
    val_loss = history.history['val_loss'] 
    acc = history.history['accuracy'] 
    val_acc = history.history['val_accuracy'] 
    epochs = range(1, len(loss) + 1) 
     
    plt.plot(epochs, acc, 'bo', label='Training accuracy') 
    plt.plot(epochs, val_acc, 'b', label='Validation accuracy')
    plt.title('Training and validation accuracy') 
    plt.xlabel('Epochs') 
    plt.ylabel('Accuracy') 
    plt.legend() 
    # plt.show()
    plt.savefig('Accuracy.pdf')
    plt.savefig('Accuracy.svg')
    #%%
    test_origin_data = sio.loadmat('./Traditional_data.mat')
    test_origin_data.keys()
    #%%
    test_data = test_origin_data['test_data']
    test_label = test_origin_data['test_label']
    Total = len(test_label[0])	# total number of test symbols
    Pe_Tra_simu = test_origin_data['Pe_simu']
    #%%
    Pe_Deep_simu = []
    # data_predict = model.predict(test_data, verbose=1)
    for i in range(len(test_data)):
    	data_predict_OneHot = model.predict(test_data[i])
    	data_predict = np.argmax(data_predict_OneHot, axis=-1)
    	Pe_Deep_simu.append((data_predict != test_label).sum() / Total)
    
    #%%
    snrdB = list(range(0, 20))
    plt.semilogy(snrdB, Pe_Tra_simu[0], 'k-^')
    plt.semilogy(snrdB, Pe_Deep_simu, 'r-.*')
    
    plt.grid(True, which='major')
    plt.grid(True, which='minor', linestyle='--')
    plt.xlabel('SNR(dB)')
    plt.ylabel('Symbol Error Rate')
    plt.legend(['Traditional Method', 'Deep Learning'])
    plt.axis([0, 18, 10**-3, 10**0])
    plt.savefig('Deep_Tra_Compare.pdf')
    plt.savefig('Deep_Tra_Compare.svg')
    #%%
    

    4 References

    1. Detailed Matlab implementation of the OFDM BER curve over AWGN
    2. Deep Learning 16QAM demodulation code
    3. Intelligent Communication: Deep-Learning-Based Physical Layer Design (智能通信:基于深度学习的物理层设计) [M]. Science Press, 2020.
  • Original article: https://www.cnblogs.com/MayeZhang/p/14798516.html