Dun Pu’s research while affiliated with Tibet University and other places


Publications (2)


Figures: the architecture of the neural network encoder; the all-connection model of the generating neural network; the structure of the output node; the structure of the part-connection SOM neural network; the architecture of the MLPN decoder (+2 more)

The Design and Simulation of Neural Network Encoder in Confidential Communication Field
  • Article
  • Publisher preview available

October 2018 · 66 Reads · Wireless Personal Communications

Wei Xiao · Dun Pu

Both the all-connection model and the part-connection model are simulated; each adopts a self-organizing map (SOM) neural network to generate check bits, so that N source bits and K check bits together compose a complete codeword. At the decoding port, a multi-layer perceptron network (MLPN) implements the decoding function. The specific steps are as follows: (1) construct the MLPN according to the size of the codeword set and the source bits; (2) train the MLPN with codeword sets generated by the neural network encoder until qualified; (3) accept and decode codewords via the trained MLPN. Actual tests show that: (1) there are no evident performance differences between the all-connection and part-connection models; (2) in the part-connection model the connection of weight sets is similar to a Tanner graph, which greatly reduces computational complexity while maintaining good performance. In sum, this encoding and decoding method has a certain market prospect in the confidential communication field.
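The three decoding steps above can be sketched in a minimal, self-contained example. The paper's SOM-based check-bit generator is not reproduced here; as a stand-in assumption, simple parity bits play the role of the K check bits, and the hidden-layer size (16) and learning rate are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in encoder: the paper generates K check bits with a
# SOM; here simple parity bits keep the sketch self-contained.
N, K = 3, 3
def encode(source_bits):
    s = np.asarray(source_bits)
    check = np.array([s[0] ^ s[1], s[1] ^ s[2], s[0] ^ s[2]])
    return np.concatenate([s, check])   # N source bits + K check bits

# Step 1: construct the MLPN sized by the codeword length and source bits.
W1 = rng.normal(0, 0.5, (N + K, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, N));     b2 = np.zeros(N)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

# Step 2: train the MLPN on the full codeword set with backpropagation.
sources = np.array([[i >> 2 & 1, i >> 1 & 1, i & 1] for i in range(8)])
codewords = np.array([encode(s) for s in sources])
lr = 0.3
for _ in range(8000):
    h = sigmoid(codewords @ W1 + b1)
    y = sigmoid(h @ W2 + b2)
    dy = (y - sources) * y * (1 - y)          # MSE gradient through sigmoid
    dh = (dy @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ dy;         b2 -= lr * dy.sum(0)
    W1 -= lr * codewords.T @ dh; b1 -= lr * dh.sum(0)

# Step 3: decode received codewords via the trained MLPN.
def decode(codeword):
    h = sigmoid(codeword @ W1 + b1)
    return (sigmoid(h @ W2 + b2) > 0.5).astype(int)
```

With this toy codebook the trained network maps every codeword back to its source bits; the part-connection variant described in the abstract would instead zero out weight-matrix entries following a Tanner-graph-like sparsity pattern.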


The Application of Optimal Weights Initialization Algorithm Based on K-L Transform in Multi-Layer Perceptron Networks

July 2013 · 26 Reads · 3 Citations

Proceedings of SPIE - The International Society for Optical Engineering

The paper presents a novel method for optimizing the initial weights of a Multi-Layer Perceptron Network (MLPN). First, the sample sets are transformed by the K-L (Karhunen-Loève) transform. Second, the K-L converting matrix is used to initialize the weights between the input and hidden layers. Third, the MLPN is trained with the BP algorithm, and its convergence speed improves evidently. The final tests show that the new algorithm is suitable for low-dimensional data.
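A minimal sketch of the described initialization, assuming the K-L transform is computed as PCA over the centered sample set (the paper does not specify how hidden units beyond the input dimension are handled; cycling through the K-L basis columns is an assumption here, as is the hidden size of 6):

```python
import numpy as np

def kl_init(X, n_hidden):
    """Initialize input->hidden weights from the K-L converting matrix."""
    Xc = X - X.mean(axis=0)                   # center the sample set
    cov = Xc.T @ Xc / (len(X) - 1)            # sample covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)    # eigvals in ascending order
    order = np.argsort(eigvals)[::-1]         # principal axes first
    kl_matrix = eigvecs[:, order]             # K-L converting matrix
    # Assumption: reuse K-L basis columns cyclically if n_hidden exceeds
    # the input dimension.
    cols = [kl_matrix[:, i % X.shape[1]] for i in range(n_hidden)]
    return np.stack(cols, axis=1)             # shape (n_inputs, n_hidden)

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 4))                 # toy sample set, 4 features
W1 = kl_init(X, 6)                            # input->hidden weights (4, 6)
```

Because the K-L basis vectors are orthonormal and aligned with the directions of greatest sample variance, hidden units start out decorrelated with respect to the data, which is the intuition behind the reported speed-up of BP training.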