Jaron Sanders’s scientific contributions

What is this page?


This page lists works of an author who doesn't have a ResearchGate profile or hasn't added the works to their profile yet. It is automatically generated from public (personal) data to further our legitimate goal of comprehensive and accurate scientific recordkeeping. If you are this author and want this page removed, please let us know.

Publications (1)


Figure 2: (a) Base 4 − 3 − 2 network; light circles indicate activations, boxes indicate post-activations. (b) Example of Design A with 2 layers as input copies to each subsequent layer. The light circles indicate the linear operations (matrix-vector products). The results of the linear operations are averaged (single solid-blue circle) and fed through the activation function, producing the multiple versions of the layer's output (boxes). (c) Example of Design B.
Figure 3: MSE (·10^2) for Design A (top) and Design B (bottom) as a function of the number of copies, on LeNet5 trained for MNIST classification. The pale area contains the 95%-confidence intervals.
Figure 4: Relative accuracy for Design A (top) and Design B (bottom) as a function of the number of copies, on LeNet5 trained for MNIST classification. The pale area contains the 56.5%-confidence intervals.
Figure 5: Accuracy of LeNet ONNs, depending on the number of inserted identity layers and the variance level of the ONN, for (a) a network with tanh activation function and one copy, (b) a network with ReLU activation function and one copy, (c) a network with linear activation function and one copy, (d) a network with tanh activation function and two copies, (e) a network with ReLU activation function and two copies, (f) a network with linear activation function and two copies.


Noise-Resilient Designs for Optical Neural Networks
  • Preprint
  • File available

August 2023 · 53 Reads

Gianluca Kosmella · … · Jaron Sanders

All analog signal processing is fundamentally subject to noise, and this is also the case in modern implementations of Optical Neural Networks (ONNs). To mitigate noise in ONNs, we therefore propose two designs that are constructed from a given, possibly trained, Neural Network (NN) that one wishes to implement. Both designs guarantee that the resulting ONN produces outputs close to those of the desired NN. To establish the latter, we analyze the designs mathematically. Specifically, we investigate a probabilistic framework for the first design that establishes that the design is correct, i.e., for any feed-forward NN with Lipschitz continuous activation functions, an ONN can be constructed that produces output arbitrarily close to the original. ONNs constructed with the first design thus also inherit the universal approximation property of NNs. For the second design, we restrict the analysis to NNs with linear activation functions and characterize the ONNs' output distribution using exact formulas. Finally, we report on numerical experiments with LeNet ONNs that give insight into the number of components required in these designs for certain accuracy gains. We specifically study the effect of noise as a function of the depth of an ONN. The results indicate that, in practice, adding just a few components in the manner of the first or the second design can already be expected to increase the accuracy of ONNs considerably.
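The averaging idea behind the first design (as described in the Figure 2 caption: run the linear operation in several copies, average the noisy pre-activations, then apply the activation once) can be illustrated with a minimal NumPy sketch. This is an assumption-laden toy model, not the paper's implementation: it assumes additive Gaussian read-out noise on each analog matrix-vector product, a small 4 − 3 − 2 tanh network like the base network of Figure 2(a), and Monte Carlo estimation of the output MSE against the noiseless network.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_linear(W, x, sigma, rng):
    # One analog matrix-vector product with additive Gaussian
    # read-out noise (an assumed noise model for illustration).
    return W @ x + sigma * rng.normal(size=W.shape[0])

def design_a_layer(W, x, sigma, copies, rng, act=np.tanh):
    # Design-A-style layer: compute the linear operation `copies`
    # times with independent noise, average the pre-activations,
    # then apply the activation function once.
    pre = np.mean([noisy_linear(W, x, sigma, rng) for _ in range(copies)],
                  axis=0)
    return act(pre)

# Toy 4-3-2 network, loosely matching the base network of Figure 2(a).
W1 = rng.normal(size=(3, 4))
W2 = rng.normal(size=(2, 3))
x = rng.normal(size=4)

def mean_output_mse(copies, sigma=0.3, trials=2000):
    # Noiseless reference output of the original NN.
    ref = np.tanh(W2 @ np.tanh(W1 @ x))
    errs = []
    for _ in range(trials):
        h = design_a_layer(W1, x, sigma, copies, rng)
        y = design_a_layer(W2, h, sigma, copies, rng)
        errs.append(np.mean((y - ref) ** 2))
    return float(np.mean(errs))

mse_1 = mean_output_mse(copies=1)
mse_8 = mean_output_mse(copies=8)
# Averaging independent copies shrinks the pre-activation noise
# variance roughly by a factor of 1/copies, so mse_8 should be
# markedly smaller than mse_1.
```

In this toy model, increasing the number of copies reduces the output MSE roughly in proportion to 1/copies, consistent with the accuracy gains from "a few components" reported in the abstract.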
