Article

Abstract

As chip sizes decrease and node dimensions cross the sub-10 nm barrier, Line Edge Roughness (LER) metrology becomes a critical issue for semiconductor research and industry. Scanning Electron Microscopy (SEM), the most widely used tool for LER metrology, suffers from noise that degrades measurement accuracy. To address this issue without damaging the measured pattern, the applicability of deep Convolutional Neural Networks (CNNs) is explored, tackling the problem at the image level. The SEM image Denoising model (SEMD) is trained on synthesized image data to detect variations of noise and outputs a denoised image of the pattern. The results are presented and compared with state-of-the-art predictions, showing the effectiveness and enhanced performance of the SEMD method.
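
As a rough illustration of the synthesized-data idea described above, the sketch below generates one (noisy, clean) training pair from a binary line-space pattern corrupted with a Poisson-Gaussian noise mixture. All parameter names and values (`pitch`, `dose`, `read_sigma`) are illustrative assumptions, not the SEMD paper's actual data generator.

```python
import numpy as np

def make_training_pair(h=256, w=256, pitch=32, cd=16,
                       dose=20.0, read_sigma=2.0, seed=None):
    """Synthesize one (noisy, clean) SEM-like training pair.

    A binary line-space pattern stands in for the imaged resist lines;
    Poisson shot noise (scaled by `dose`) plus additive Gaussian noise
    approximates the SEM noise mixture. Parameters are illustrative,
    not taken from the SEMD paper.
    """
    rng = np.random.default_rng(seed)
    clean = np.zeros((h, w), dtype=np.float64)
    for x0 in range(0, w, pitch):
        clean[:, x0:x0 + cd] = 1.0          # bright resist lines
    # Poisson component: expected electron counts scale with dose.
    counts = rng.poisson(clean * dose + 1.0)
    noisy = counts / dose + rng.normal(0.0, read_sigma / dose, size=(h, w))
    return noisy.astype(np.float32), clean.astype(np.float32)

noisy, clean = make_training_pair(seed=0)
print(noisy.shape, clean.shape, float(noisy.std()))
```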

... The irradiated region saturates to form a contamination layer and suppresses further emission of electrons [37]. Due to the risk of sample damage, there is a persisting interest in the hardware assurance community in low-dose imaging [38]. Residue: any foreign remnant on the surface of the die that prevents observation and resolution of surface-level features on the IC die can be called a residue. ...
... These include spatial filtering approaches such as Gaussian, median, curvature, anisotropic diffusion, wavelet, adaptive Wiener, and hysteresis smoothing filters [66]-[69]. Simple high-frequency filtering and DL-based denoising approaches have also been used on SEM images [38]. These techniques are mostly naive image processing techniques and do not take the semantics of structures in the image into account. ...
... Therefore, it learns the image residual (noise) by subtracting the latent clean image from the noisy input. This architecture has been successfully leveraged for EWR/LWR estimation on images with Gaussian-Poisson mixture noise [34], [38]. However, DnCNN is often criticized for easily overfitting to a specific noise model and failing to maintain the same performance on real noisy images. ...
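
For context, residual learning as described in this snippet means the network predicts the noise map, and the clean estimate is obtained by subtraction. A minimal PyTorch sketch of the inference step, assuming a trained denoising network `net` (hypothetical name):

```python
import torch

@torch.no_grad()
def denoise_residual(net: torch.nn.Module, noisy: torch.Tensor) -> torch.Tensor:
    """Residual-learning inference: the network outputs the estimated
    noise map, which is subtracted from the noisy input (DnCNN-style)."""
    net.eval()
    predicted_noise = net(noisy)      # shape: (N, 1, H, W)
    return noisy - predicted_noise
```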
Article
Full-text available
Comprehensive hardware assurance approaches guaranteeing trust in Integrated Circuits (ICs) typically require the verification of the IC design layout and functionality through destructive Reverse Engineering (RE). It is a resource-intensive process that would benefit greatly from the extensive integration of data-driven paradigms, especially in the imaging and image analysis phase. This uptake of data-driven approaches into RE-assisted hardware assurance is lagging, however, due to the lack of massive amounts of high-quality labelled data. In this paper, a large-scale synthetic Scanning Electron Microscopy (SEM) dataset, REFICS, is introduced to address this issue. The dataset, the first open-source dataset in the RE community, consists of 800,000 SEM images over two node technologies, 32 nm and 90 nm, and four cardinal layers of the IC, namely the doping, polysilicon, contact and metal layers. Furthermore, a framework based on uncertainty and risk is introduced to compare the efficacy and benefits of existing RE workflows utilizing ad-hoc steps in their execution. These developments are critical in turning RE-assisted hardware assurance into a scalable, automated and fault-tolerant approach. Finally, the work concludes with a performance analysis of existing machine learning and deep learning approaches for image analysis in RE and hardware assurance.
... For a given beam width and beam current, to achieve a desired spatial resolution, the dwell time per pixel needs to be reduced to increase the scanning rate. However, reducing the dwell time to speed up the scanning will correspondingly reduce the beam dose per pixel which in turn affects the secondary electron (SE) emission and backscattered electron emission per pixel that constitute the signal strength [5][6][7]. This results in a reduction of signal-to-noise ratio (SNR). ...
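
The dose-SNR trade-off described in this snippet follows directly from shot-noise statistics: the ideal per-pixel SNR scales as the square root of the delivered electron count, hence of the dwell time. A small illustrative calculation (detector yield and all other noise sources deliberately ignored):

```python
import math

def shot_noise_snr(beam_current_pA: float, dwell_time_ns: float) -> float:
    """Ideal shot-noise-limited SNR per pixel: SNR = sqrt(N_e), where
    N_e is the number of primary electrons delivered during the dwell.
    Illustrative only; real SEM SNR also depends on SE yield, detector
    efficiency, and additional noise sources."""
    e = 1.602e-19  # elementary charge, C
    n_electrons = beam_current_pA * 1e-12 * dwell_time_ns * 1e-9 / e
    return math.sqrt(n_electrons)

for dwell in (1000, 500, 100):  # dwell times in ns
    print(f"dwell {dwell:5d} ns -> SNR ~ {shot_noise_snr(50, dwell):.1f}")
```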
... Though there are some similarities between the noise processes in SEMs and those in low-light imaging, low-dose computed tomographic imaging, etc., it may be noted that noise in SEM imaging arises at several different stages, because the electron has charge and mass and its interaction with the specimen is quite different from that of photons [12]. In the context of SEMs, D-CNNs have been used to reduce noise in line edge roughness measurements [7], for random sparse scanning in electron microscopy [4], in a robust autofocusing deep learning network [40,41], and in other work [42]. ...
Article
We report noise reduction and image enhancement in Scanning Electron Microscope (SEM) imaging while maintaining a Fast-Scan rate during imaging, using a Deep Convolutional Neural Network (D-CNN). SEM images of non-conducting samples without a conducting coating always suffer from the charging phenomenon, giving rise to images with low or anomalous contrast and permanent damage to the sample. One way to avoid this effect is to use Fast-Scan mode, which suppresses the charging effect fairly well. Unfortunately, this also introduces noise and gives blurred images. The D-CNN has been used to predict a relatively noise-free image, as would be obtained from a Slow-Scan, from a noisy Fast-Scan image. The predicted images have the sharpness of Slow-Scan images while reducing the charging effect seen at Fast-Scan rates. We show that, using the present method, it is possible to increase the scanning rate by a factor of about seven with an output image quality comparable to that of the Slow-Scan mode. We present experimental results in support of the proposed method.
... They include spatial filtering approaches such as Gaussian, median, curvature, anisotropic diffusion, wavelet, adaptive Wiener, and hysteresis smoothing filters [35,3,46,36]. Simple high-frequency filtering and DL-based denoising approaches have also been used on SEM images [17]. ML-based denoising approaches, such as image inpainting, super-resolution and dictionary-based sparse reconstruction, have also been explored for SEM images [28,45,7,30]. ...
... DnCNN is the most used architecture in SEM related applications. It has been leveraged for EWR/LWR estimation on images with unknown levels of Gaussian-Poisson mixture noise [7,17]. However, DnCNN is often criticized for easily over-fitting to a specific noise model. ...
Conference Paper
Full-text available
Hardware assurance is a key process in ensuring the integrity, security and functionality of a hardware device. Its heavy reliance on images, especially on Scanning Electron Microscopy images, makes it an excellent candidate for the vision community. The goal of this paper is to provide a pathway for inter-community collaboration by introducing the existing challenges for hardware assurance on integrated circuits in the context of computer vision and support further development using a large-scale dataset with 800,000 images. A detailed benchmark of existing vision approaches in hardware assurance on the dataset is also included for quantitative insights into the problem.
... Moreover, clay compressibility has been investigated using machine learning algorithms [14]. Deep learning has been adopted for denoising of SEM images [15], identification of rock pore structures and permeabilities [16], image segmentation for mineral characterization [14], and also for nanoparticle detection [17]. Finally, researchers have discussed whether deep learning can recognize microstructures as well as the trained human eye [18]. ...
... Typically seen mineralogical contents in the SEM images of sandstone reservoir rock published in the literature (figure caption; the panel-by-panel key to source references [34]-[65] is garbled in extraction and omitted here). ...
Article
The presence of clays in hydrocarbon reservoirs significantly limits the producible amount of oil and gas. Therefore, this study reports a detailed quantitative characterization of clays' specific properties from two fundamental aspects: clays' type and amount, and their impact on the reservoir's fluid flow. We used Scanning Electron Microscopy (SEM) images, adopting deep learning for typing and quantifying clays and the Lattice-Boltzmann Method (LBM) for flow simulations with and without the presence of clays. The trained deep learning model of the present study was translated into a MATLAB application, a convenient tool for clay characterization by future users. This model was trained on 2160 images of different clay minerals using transfer learning with AlexNet and achieved more than 95.4% accuracy when applied to unseen images. Moreover, we established a technique of depth-slicing 2D SEM images, which enables 3D processing of routine SEM images. The results from this technique showed that clays can reduce reservoir porosity and permeability by more than 30% and 400 mD, respectively. The introduced approach provides new insights into the detailed impacts of clay minerals on reservoir quality.
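
The abstract describes transfer learning with AlexNet (implemented there as a MATLAB application). A hedged PyTorch equivalent of that general recipe is sketched below; the class count and layer-freezing policy are assumptions, not the paper's settings.

```python
import torch
import torch.nn as nn
from torchvision import models

def build_clay_classifier(num_classes: int = 4) -> nn.Module:
    """Transfer learning on ImageNet-pretrained AlexNet: freeze the
    convolutional features and replace the final fully connected layer
    with a new head for the clay-mineral classes. The number of classes
    and the freezing policy are illustrative assumptions."""
    net = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
    for p in net.features.parameters():
        p.requires_grad = False           # keep pretrained conv features
    net.classifier[6] = nn.Linear(4096, num_classes)  # new output head
    return net

model = build_clay_classifier()
logits = model(torch.randn(1, 3, 224, 224))
print(logits.shape)  # torch.Size([1, 4])
```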
... With significant developments in SEM imaging, especially in scan generators, it is possible to acquire significantly better images at lower dwell times. However, there is a persisting interest in the community in low-dose imaging to reduce sample damage [64]. There have been several attempts to model the noise introduced by lower-dose imaging in the microscopy community, but the influence of noise on recovering features in semiconductor RE has not been studied in detail. ...
... They include spatial filtering approaches such as Gaussian, median, curvature, anisotropic diffusion, wavelet, adaptive Wiener, and hysteresis smoothing filters [5,130,132,195]. Simple high-frequency filtering and deep learning-based denoising approaches have also been used on SEM images [64]. These techniques are mostly naive image processing techniques and do not take the semantics of structures in the image into account. ...
Preprint
Full-text available
In the context of hardware trust and assurance, reverse engineering has often been considered an illegal action. Generally speaking, reverse engineering aims to retrieve information from a product, i.e., integrated circuits (ICs) and printed circuit boards (PCBs) in hardware security-related scenarios, in the hope of understanding the functionality of the device and determining its constituent components. Hence, it can raise serious issues concerning Intellectual Property (IP) infringement, the (in)effectiveness of security-related measures, and even new opportunities for injecting hardware Trojans. Ironically, reverse engineering can enable IP owners to verify and validate the design. Nevertheless, this cannot be achieved without overcoming numerous obstacles that limit successful outcomes of the reverse engineering process. This paper surveys these challenges from two complementary perspectives: image processing and machine learning. These two fields of study form a firm basis for the enhancement of efficiency and accuracy of reverse engineering processes for both PCBs and ICs. In summary, therefore, this paper presents a roadmap indicating clearly the actions to be taken to fulfill hardware trust and assurance objectives.
... That is, a restoration network is trained so that the error between an output restored image and its ground-truth image is minimized. In previous methods of microscopy image restoration [37-40], a single image is fed into a network in order to restore that image. On the other hand, in our proposed method, as with the SR mentioned above, time-lapse SPM images are employed to improve the quality of the output image. ...
... That is, it is difficult to obtain a pair consisting of a clean image and its SPM-specific degraded counterpart for training a restoration model. In previous methods for denoising microscopy images [37,38], empirically produced synthetic degradations are applied in order to artificially create the paired images. However, if these artificially reproduced degradations differ from those observed in real SPM images, the difference makes it difficult to train a restoration model that is applicable to real SPM images. ...
Article
Full-text available
This paper presents methods for enhancing and restoring Scanning Probe Microscopy (SPM) images. We focus on image super-resolution as enhancement, and on image denoising and deblurring as restoration. We assume that nearly identical time-lapse images are captured in the same area of each specimen. In contrast to single-image approaches, our proposed methods, which use a recurrent neural network, improve the enhancement and restoration of SPM images by merging the time-lapse images to acquire a single enhanced/restored image. However, subtle deformations between the time-lapse SPM images, as well as degraded pixels such as noisy and blurred pixels, prevent the network from successfully merging the images. For a successful merge, our methods spatially align the time-lapse images and detect degraded pixels based on the characteristic properties of SPM images. Experimental results demonstrate that our methods can reconstruct sharp super-resolved images and clean noiseless images.
... The profiles studied in this work correspond to random uncorrelated LER. The correlation length is reported to vary between 5 and 40 nm [34], which is consistent with SEM (scanning electron microscope) studies of EUV (extreme ultraviolet) lithography reported in [33], [35]. ...
Article
Full-text available
The impact of different variability sources on transistor performance increases as devices are scaled down, with metal grain granularity (MGG) and line edge roughness (LER) being among the major contributors to this increase. Variability studies require the simulation of large samples of different device configurations to achieve statistical significance, increasing the computational cost. A novel Pelgrom-based predictive (PBP) model is proposed that estimates the impact of MGG and LER through the study of the threshold voltage standard deviation $\sigma V_{Th}$. This technique is computationally efficient since, once the threshold voltage mismatch is calculated, $\sigma V_{Th}$ can be predicted for different gate lengths $L_{g}$, cross-sections, and intrinsic variability parameters without further simulations. The validity of the PBP model is demonstrated for three state-of-the-art architectures (FinFETs, nanowire FETs, and nanosheet FETs) with different $L_{g}$, cross-sections, and drain biases $V_{D}$. The relative errors between the predicted and simulated data are lower than 10% in 92% of the cases.
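
The Pelgrom-style scaling underlying such a predictive model relates threshold-voltage mismatch to device area, $\sigma_{\Delta V_{Th}} = A_{VT}/\sqrt{WL}$, so a coefficient calibrated at one geometry extrapolates to other gate lengths without further simulation. A minimal sketch of that scaling; the coefficient and dimensions below are hypothetical, not the paper's calibrated PBP model:

```python
import math

def sigma_vth(a_vt_mv_um: float, width_nm: float, length_nm: float) -> float:
    """Pelgrom-style mismatch scaling: sigma(V_Th) = A_VT / sqrt(W * L),
    with A_VT in mV*um and the result in mV. This only illustrates the
    area scaling a PBP-type model exploits; it is not the paper's model."""
    area_um2 = (width_nm * 1e-3) * (length_nm * 1e-3)
    return a_vt_mv_um / math.sqrt(area_um2)

a_vt = 1.2  # mV*um, hypothetical calibration at one reference geometry
for lg_nm in (12, 16, 20):  # gate lengths, nm
    print(f"Lg = {lg_nm} nm -> sigma(VTh) ~ {sigma_vth(a_vt, 30.0, lg_nm):.0f} mV")
```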
... Convolutional neural networks (CNNs) achieve state-of-the-art denoising performance on natural images (Tian et al., 2019) and are an emerging tool in various fields of scientific imaging, for example, in fluorescence light microscopy (Belthangady & Royer, 2019; Zhang et al., 2019) and in medical diagnostics (Yang et al., 2017; Jifara et al., 2019). In electron microscopy, deep CNNs are rapidly being developed for denoising in a variety of applications, including structural biology (Buchholz et al., 2019; Bepler et al., 2020), semiconductor metrology (Chaudhary et al., 2019; Giannatou et al., 2019), and drift correction (Vasudevan & Jesse, 2019), among others (Ede & Beanland, 2019; Lee et al., 2020; Wang et al., 2020; Lin et al., 2021; Spurgeon et al., 2021), as highlighted in a recent review (Ede, 2020). CNNs trained for segmentation have also been used to locate the position of atomic columns as well as to estimate their occupancy (Madsen et al., 2018) in relatively high-SNR (S)TEM images (i.e., SNR ≈ 10). ...
Article
Full-text available
A deep convolutional neural network has been developed to denoise atomic-resolution transmission electron microscope image datasets of nanoparticles acquired using direct electron counting detectors, for applications where the image signal is severely limited by shot noise. The network was applied to a model system of CeO2-supported Pt nanoparticles. We leverage multislice image simulations to generate a large and flexible dataset for training the network. The proposed network outperforms state-of-the-art denoising methods on both simulated and experimental test data. Factors contributing to the performance are identified, including (a) the geometry of the images used during training and (b) the size of the network's receptive field. Through a gradient-based analysis, we investigate the mechanisms learned by the network to denoise experimental images. This shows that the network exploits both extended and local information in the noisy measurements, for example, by adapting its filtering approach when it encounters atomic-level defects at the nanoparticle surface. Extensive analysis has been done to characterize the network's ability to correctly predict the exact atomic structure at the nanoparticle surface. Finally, we develop an approach based on the log-likelihood ratio test that provides a quantitative measure of the agreement between the noisy observation and the atomic-level structure in the network-denoised image.
... Mike Williamson and Andrew Neureuther introduced a technique for recovering more accurate line edge roughness (LER) data from SEM images using image deblurring; their study proved that LER can vary significantly before and after deblurring [6]. E. Giannatou and others trained the SEM image Denoising model (SEMD) on synthesized image data to detect the variations of noise and provide a denoised image of the pattern as output [7]. ...
Article
Full-text available
Scanning electron microscopy (SEM) images are a special and important type of image that needs specific enhancement to extract more information about nanoscale phenomena. Many studies have addressed this purpose with traditional and newly suggested enhancement methods; here, a new method is presented in the form of a hybrid filter formed from the mean and Sobel filters, which are common filters for enhancement and edge detection, respectively. The results demonstrate the success of the new filter on a standard image at many noise levels before applying it to SEM images. The new filter gave better results than the mean filter used as a traditional enhancement filter, and it is added as a new technique for nanoscale images.
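
The abstract does not spell out how the mean and Sobel outputs are combined, so the sketch below is one plausible reconstruction: mean filtering for noise suppression blended with the Sobel gradient magnitude for edge contrast, with the blending weight `alpha` as an assumed free parameter.

```python
import numpy as np
from scipy import ndimage

def hybrid_mean_sobel(img: np.ndarray, size: int = 3, alpha: float = 0.5):
    """Hybrid enhancement in the spirit of the abstract: a mean filter
    suppresses noise while the Sobel gradient magnitude restores edge
    contrast. The blending rule (weighted sum, weight `alpha`) is an
    assumption; the paper's exact combination is not specified here."""
    img = img.astype(np.float64)
    smoothed = ndimage.uniform_filter(img, size=size)   # mean filter
    gx = ndimage.sobel(smoothed, axis=1)                # horizontal gradient
    gy = ndimage.sobel(smoothed, axis=0)                # vertical gradient
    edges = np.hypot(gx, gy)
    if edges.max() > 0:
        edges = edges / edges.max() * img.max()         # rescale to image range
    return (1.0 - alpha) * smoothed + alpha * edges
```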
... Note, as shown in Figure 5(a), that the image areas used for alignment do not have to be in the same area, but they should have the same physical size. • Image noise removal: Although SEM image noise has not been studied comprehensively, [31] and [32] successfully reconstructed lines and edges in SEM images under the assumption of Gaussian noise. Therefore, a Gaussian filter is applied in this study, followed by a bilateral filter to smooth the cell edges further. ...
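
A compact sketch of the two-stage smoothing pipeline this snippet describes (a Gaussian filter under the Gaussian-noise assumption, then a bilateral filter to further smooth cell edges); kernel sizes and sigmas are illustrative choices, not the study's tuned values.

```python
import cv2
import numpy as np

def smooth_sem(img: np.ndarray) -> np.ndarray:
    """Gaussian blur followed by an edge-preserving bilateral filter,
    mirroring the cited pipeline. `img` is an 8-bit grayscale image;
    all filter parameters are illustrative assumptions."""
    blurred = cv2.GaussianBlur(img, ksize=(5, 5), sigmaX=1.0)
    return cv2.bilateralFilter(blurred, d=9, sigmaColor=50, sigmaSpace=50)
```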
Conference Paper
Full-text available
Object localization is an essential step in image-based hardware assurance applications, navigating the view to the target location. Existing localization methods are well developed for applications in many other research fields; however, limited study has been conducted to explore an accurate yet efficient solution in the hardware assurance domain. To this end, this paper discusses the challenges of leveraging existing object localization methods from three aspects, using the example scenario of IC Trojan detection, and proposes a novel knowledge-based object localization method. The proposed method is inspired by the 2D string search algorithm; it also couples a mask window to preserve target topology, which enables multi-target localization. Evaluations are conducted on 61 test cases from five images of three node technologies. The results validate the accuracy, time-efficiency, and generalizability of the proposed method in locating multiple targets in SEM images for hardware assurance applications.
... Deep learning-based CNNs achieve state-of-the-art denoising performance on natural images and are an emerging tool in various fields of scientific imaging, for example, in fluorescence light microscopy (Zhang et al., 2019;Belthangady & Royer, 2019) and in medical diagnostics (Yang et al., 2017;Jifara et al., 2019). In electron microscopy, deep CNNs are rapidly being developed for denoising in a variety of applications, including structural biology (Buchholz et al., 2019;Bepler et al., 2020), semiconductor metrology (Chaudhary et al., 2019;Giannatou et al., 2019), and drift correction (Vasudevan & Jesse, 2019), among others (Ede & Beanland, 2019), as highlighted in a recent review (Ede, 2020). ...
Preprint
A deep learning-based convolutional neural network has been developed to denoise atomic-resolution in situ TEM image datasets of catalyst nanoparticles acquired on high-speed, direct electron counting detectors, where the signal is severely limited by shot noise. The network was applied to a model catalyst of CeO2-supported Pt nanoparticles. We leverage multislice simulation to generate a large and flexible dataset for training and testing the network. The proposed network outperforms state-of-the-art denoising methods by a significant margin on both simulated and experimental test data. Factors contributing to the performance are identified, including most importantly (a) the geometry of the images used during training and (b) the size of the network's receptive field. Through a gradient-based analysis, we investigate the mechanisms used by the network to denoise experimental images. This shows that the network exploits information on the surrounding structure and adapts its filtering approach when it encounters atomic-level defects at the catalyst surface. Extensive analysis has been done to characterize the network's ability to correctly predict the exact atomic structure at the catalyst surface. Finally, we develop an approach based on the log-likelihood ratio test that provides a quantitative measure of uncertainty regarding the atomic-level structure in the network-denoised image.
... For example, a suitably optimised DnCNN method can be successfully used for denoising scanning electron microscopy (SEM) images, 16,17 which also improves the accuracy of nanometre-scale SEM measurements. Some scholars have applied previously proposed networks to mapping between images, giving these network structures new application prospects. ...
Article
Through-focus scanning optical microscopy (TSOM) is a model-based nanoscale metrology technique that combines conventional bright-field microscopy with the relevant numerical simulations. A TSOM image is generated after through-focus scanning and data processing. However, the mechanical vibration and optical noise introduced into the TSOM image during image generation can affect the measurement accuracy. To reduce this effect, this paper proposes an imaging error compensation method for the TSOM image based on deep learning with U-Net. Here, the simulated TSOM image is regarded as the ground truth, and the U-Net is trained on the experimental TSOM images by means of a supervised learning strategy. The experimental TSOM image is first encoded and then decoded with the U-shaped structure of the U-Net. The difference between the experimental and simulated TSOM images is minimised by iteratively updating the weights and bias factors of the network to obtain the compensated TSOM image. The proposed method is applied to optimising TSOM images for nanoscale linewidth estimation. The results demonstrate that the proposed method performs as expected and provides a significant enhancement in accuracy.
... The last convolution layer of each network consists of a single filter with the kernel size shown in Fig. 7. Giannatou et al. [106] proposed the residual learning CNN (SEMD) for noise removal in scanning electron microscopy images. SEMD is a residual learning method inspired by DnCNN and trained to estimate the noise at each pixel of a noisy image. ...
Article
Full-text available
Image denoising faces significant challenges arising from the sources of noise; specifically, Gaussian, impulse, salt-and-pepper, and speckle noise are complicated noise sources in imaging. Convolutional neural networks (CNNs) have increasingly received attention for the image denoising task, and several CNN denoising methods, evaluated on different datasets, have been studied. In this paper, we offer an elaborate study of the different CNN techniques used in image denoising. CNN denoising methods are categorized and analyzed, popular datasets used for their evaluation are investigated, and a selection of previous and recent papers is reviewed. Motivations and principles of the CNN methods are outlined; some state-of-the-art CNN image denoising methods are depicted in graphical form, while others are explained in detail. Finally, potential challenges and directions for future research are fully discussed.
... Apart from SEM measurements, important data can be collected via nondestructive advanced methods such as X-ray diffraction contrast tomography and near-field high-energy X-ray diffraction microscopy (nf-HEDM). Furthermore, deep learning and artificial intelligence (AI) algorithms [78] are the future of studies searching for correlations between biofilm and structure, exactly as has already happened in MP spectroscopy [79]. However, proper data sets are crucial to train AI algorithms. ...
Article
Full-text available
The constantly growing amount of synthetic materials < 5 mm, called microplastics (MPs), is fragmented in the environment. Thus their surface, the Plastisphere, is substantially increasing, forming an entirely new ecological niche. It has already been extensively studied by microbiologists observing the biofilm and by materials scientists interested in the weathering of polymer materials. This paper aims to build a bridge between the physical and chemical description of the Plastisphere and its microbiological and ecological significance. Various algorithms based on the analysis of pictures obtained by scanning electron microscopy (SEM) are proposed to describe in detail the morphology of naturally weathered polymers. In particular, one can study the size and distribution of fibres in a standard filter, search the synthetic debris for mapping, estimate the grain size distribution, quantitatively characterize the different patterns of degradation for polymer spheres and ghost nets, or calculate the number of pores per surface. The description and visualization of a texture, as well as the classification of different morphologies present on a surface, are indispensable for the comprehensive characterization of weathered polymers found inside animals (e.g., fish). All these approaches are presented as case studies and discussed within this work.
... For example, five types of machine learning methods and one deep learning model were applied to achieve pixel-level mineral classification of SEM-EDS images [17]. Deep learning was used to remove SEM image noise and improved the measurement accuracy of line edge roughness [18]. U-Net [19] was used to analyze SEM images for mineral characterization, effectively distinguishing mixed matrix mineral particles from organic clay aggregates [20]. ...
Article
Full-text available
The scanning electron microscope (SEM) is widely used in materials analysis and research, including fracture analysis, microstructure morphology, and nanomaterial analysis. With the rapid development of materials science and computer vision technology, the level of detection technology is constantly improving. In this paper, a deep learning method is used to intelligently identify microcracks in the microscopic morphology of SEM images. An image-level deep learning model is selected to reduce the interference of other complex microscopic topography, and a detection method with dense, continuous bounding boxes suited to SEM images is proposed. The dense, continuous bounding boxes are used to obtain local crack features, and the boxes are rotated to reduce the feature differences between them. Finally, bounding boxes with filled regression are used to highlight the microcrack detections. The results show that the detection accuracy of our approach reached 71.12%, with a highest mIOU of 64.13%; microcracks at different magnifications and against different backgrounds were also detected successfully.
... However, the level of noise present on the exemplary images was never high enough to hinder automatic identification of features. Other examples of ML algorithms for noise removal in microscopy are presented in References [22,23]. ...
Article
Full-text available
In our study, compliance was tested by comparing automatically detected precipitates in L-PBF Inconel 625 with experimentally detected phases and with the results of thermodynamic modeling. The combination of complementary electron microscopy techniques with microanalysis of the chemical composition allowed us to examine the structure and chemical composition of the related features. The possibility of automatic detection and identification of precipitated phases based on STEM-EDS data is presented and discussed. The automatic segmentation of images and identification of distinguishing regions are based on processing STEM-EDS data as multispectral images. Image processing methods and statistical tools are applied to maximize the information gained from data with a low signal-to-noise ratio, keeping human interaction to a minimum. The proposed algorithm allowed automatic detection of precipitates and identification of interesting regions in Inconel 625, while significantly reducing the processing time with acceptable quality of results.
... Recently, the development of deep network architectures has made deep learning methods widely applicable for denoising in various fields (Azarang and Kehtarnavaz, 2020; Tian et al., 2020). Deep learning-driven denoising techniques, such as autoencoders (Vincent et al., 2010), artificial neural networks (ANN) (Fichou and Morlock, 2018), recurrent neural networks (RNN) (Antczak, 2018; Osako et al., 2015), and convolutional neural networks (CNN) (Giannatou et al., 2019; Liu et al., 2014; Chakraborty et al., 2021), can model the non-linear relationship between noise and the clean signal through pre-training, whereby redundant information in a new input signal can be filtered accurately. Compared with conventional denoising methods, deep learning-based denoising approaches can be expected to avoid complicated manual parameter setting and to handle different tasks without many modifications. ...
Article
Full-text available
Vibration-based approaches are of great importance for structural health monitoring and condition assessment, but the noise inevitably present in field measurements poses great obstacles to the corresponding data-driven analysis. Developing effective methods to denoise vibration signals has become a stringent prerequisite. Hence, a novel denoising approach based on deep convolutional image-denoiser networks (DCIMN) is proposed in this study, and its methodology and architecture are elaborated. Specific techniques with novelties including noise injection in training labeling, dimension expansion in feature extraction, and optimizer embedding in the encoder-decoder are utilized to enhance the denoising performance. Measured vibration data from Shanghai Tower are used for validation, based on which modal identifications are also conducted. Detailed evaluation confirms the method's powerful capability and efficiency in denoising signals. Demanding no prior information about the input signal, the proposed method performs vibration signal denoising in an intelligent way, demonstrating a vast prospect in engineering practice.
... Considering AI in SEM, studies center on image processing. The AI-assisted applications of SEM can be listed as follows: image re-focusing and de-focusing [11], classification of defective nanomaterials from SEM images [12], image analysis for material generation [13], generation of synthetic datasets for detailed SEM analysis [14], classification of nanomaterials using SEM images [15], de-noising of SEM images [16], and resolution enhancement of SEM images [17]. Of course, the list is not limited to these and can be extended; please refer to [18] for other types of applications. ...
Conference Paper
Full-text available
Today, a brand new phenomenon has shone into our daily lives with its magical effects; it is called artificial intelligence (AI). Almost all industrial actors in the field feel the influence of AI through data processing in their routine operations. The most powerful tasks that AI can accomplish include the computer vision tasks of autonomous applications, forecasting, and image recognition. AI applications have a strong ability in field applications thanks to their raw-data processing feature; the core function of AI is gathering valuable information from raw data. AI plays a key role in many vital applications, from remote sensing of space images to agricultural operations, and there are still open doors for AI in a variety of other sectors. In the last decade, there has been a rise in nanotechnology applications exploiting the effectiveness of AI; we call it "AI sense in nanotechnology". The contact between AI and nanotechnology can shape perspectives for novel technological evolutions, including across disciplines. In our study, we introduce the modern, driving topics that take advantage of AI for their processes, along with machine learning and deep learning.
... As the first step in the framework development, we have proposed a deep learning (DL)-based noise filter [16]. Although DL applications to image processing and analysis mostly focus on detection [17,18], segmentation [19], and classification [20] of objects, DL is also utilized as a denoising technique [21] in the field of electron microscopy. The advantage of the DL-based noise filter is that it does not have to assume noise types. ...
Article
Full-text available
Application of scanning transmission electron microscopy (STEM) to in situ observation will be essential in current and emerging data-driven materials science, taking into account STEM's high affinity with various analytical options. As is well known, STEM's image acquisition time needs to be further shortened to capture a targeted phenomenon in real time, as STEM's current temporal resolution is far below conventional TEM's. However, rapid image acquisition at milliseconds per frame or faster generally causes image distortion, poor electron signals, and unidirectional blurring, which are obstacles to realizing video-rate STEM observation. Here we show an image correction framework integrating deep learning (DL)-based denoising and image distortion correction schemes optimized for STEM rapid image acquisition. By comparing a series of distortion-corrected rapid scan images with corresponding regular scan speed images, the trained DL network is shown to remove not only the statistical noise but also the unidirectional blurring. This result demonstrates that rapid as well as high-quality image acquisition by STEM can be established by DL without hardware modification. The DL-based noise filter could be applied to in situ observation, such as dislocation activities under external stimuli, with high spatio-temporal resolution.
... The reference metrology for measuring the 3D sidewall's shape can be used for evaluating SEM and other dimensional-measurement techniques such as tilt-beam (or 3D-) SEM and normal AFM, and will be useful for providing reference input data for deep learning-aided LER metrology [24,25]. ...
Article
Background: Conventional scanning electron microscopy (SEM), used for 2D top-view metrology, is the classical line edge roughness (LER) measurement technique but is incapable of measuring the 3D structure of a nanoscale line pattern. For LER measurements, SEM generates a single line-edge profile for the 3D sidewall roughness, although the line-edge profile differs at each height of the 3D sidewall. Aim: To develop an evaluation method for SEM-based LER measurement techniques and to verify how the 3D sidewall shape is reflected in the SEM's 2D result. Approach: Direct comparison by measuring an identical location on a line pattern with SEM and with atomic force microscopy (AFM) using the tip-tilting technique, which is capable of measuring the 3D sidewall. The line pattern has vertical stripes on the sidewall due to its fabrication process. Measured line-edge profiles were analyzed using the power spectral density, the height-height correlation function, and the autocorrelation function. Results: The line-edge profiles measured by SEM and AFM matched well except for the noise level. Frequency and scaling analyses showed that the SEM profile contained high noise and, in contrast to AFM, had lost the property of self-affine fractals. Conclusions: For a line pattern with vertical stripes on the sidewall, the SEM profile is generally consistent with the 3D sidewall shape. The AFM-based LER measurement technique is useful as LER reference metrology for evaluating other LER measurement techniques.
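
The profile analyses named in this abstract (power spectral density and correlation functions) can be illustrated compactly. The sketch below computes a one-sided PSD and a normalized autocorrelation for a line-edge profile; normalization conventions differ between metrology tools, so treat it as indicative only.

```python
import numpy as np

def edge_psd_acf(edge_nm: np.ndarray, pixel_nm: float):
    """One-sided power spectral density and normalized autocorrelation
    of a line-edge profile (`edge_nm`: edge position per scan line).
    Normalization follows one common convention; others exist."""
    x = edge_nm - edge_nm.mean()
    n = x.size
    psd = np.abs(np.fft.rfft(x)) ** 2 * pixel_nm / n   # PSD estimate
    freqs = np.fft.rfftfreq(n, d=pixel_nm)             # spatial freq, 1/nm
    acf = np.correlate(x, x, mode="full")[n - 1:] / (x.var() * n)
    return freqs, psd, acf

# Self-affine test profile: integrated white noise, 1 nm pixels.
rng = np.random.default_rng(1)
profile = np.cumsum(rng.normal(0, 0.1, 1024))
f, psd, acf = edge_psd_acf(profile, pixel_nm=1.0)
print(f"3-sigma LER ~ {3 * profile.std():.2f} nm")
```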
... However, chirality detection requires the more potent methodology of deep learning (DL), a subfield of ML that employs neural networks to solve complex human-like image recognition problems. DL has been widely employed in computer vision [20] and is even more advantageous in EM, improving the signal-to-noise ratio, correcting aberrations, and reducing specimen drift [21-28], thereby increasing the resolution of SEM [29-31], STEM [32], and TEM [33]. Other emerging applications of DL include image labelling for identifying different image regions [34-38] and semantic segmentation classifying pixels into discrete categories. ...
Preprint
Full-text available
Chirality of helical objects, exemplified by nanostructured inorganic particles, has unifying importance for many scientific fields. Their handedness can be determined visually, but its identification by analysis of electron microscopy images is fundamentally difficult because (1) image features differentiating left- and right-handed particles can be ambiguous and ancillary, and (2) the three-dimensional particle structure essential for chirality is 'flattened' into two-dimensional projections. Here we show that deep learning algorithms can reliably identify and classify twisted bowtie-shaped microparticles in scanning electron microscopy images with accuracy as high as 94.4%, having been trained on as few as 180 images. Furthermore, after training on bowtie particles with complex nanostructured features, the model can recognize other chiral shapes with different geometries without re-training. These findings indicate that deep learning can potentially replicate the visual analysis of chiral objects by humans and enable automated analysis of microscopy data for the accelerated discovery of chiral materials.
... In ref. [216], there were attempts to denoise SEM images: the authors employed convolutional neural networks to discover the underlying link between noisy and noise-free pictures. The purpose of this work was to investigate the applicability of deep learning-based approaches to circumvent the need for SEM equipment to use high-energy electron beams. ...
Article
Full-text available
Colloidal material design necessitates a collection of computational approaches ranging from quantum chemistry to molecular dynamics and continuum modeling. Machine learning (ML), an umbrella term for current computation-based optimization approaches, has accelerated the prediction of material characteristics. Colloidal materials include polymers, liquid crystals, and colloids. Supervised and unsupervised strategies come under scrutiny in this review. Hybrid approaches, such as combined ML and molecular dynamics simulation procedures, offer capabilities not available through the present arsenal of characterization tools and can improve our understanding of materials and design protocols. In this review, we have accumulated expertise and information from over 300 sources.
... Poisson noise originates from the varying number of electrons that hit the specimen at each measurement spot. Gaussian noise is the result of microscope electronics [25]. Speckle noise is rarely a problem for SEM or CT-scan imaging; however, we add this type of noise to our images to further complicate the denoising task. ...
Article
Full-text available
Imaging methods have broad applications in the geosciences. Scanning electron microscopy (SEM) and micro-CT scanning have been applied to studying various geological problems. Despite significant advances in imaging capabilities and image processing algorithms, acquiring high-quality data from images is still challenging and time-consuming. Obtaining a 3D representative volume for a tight rock sample takes days to weeks. Image artifacts such as noise further complicate the use of imaging methods for the determination of rock properties. In this study, we present applications of several convolutional neural networks (CNN) for rapidly denoising, deblurring and super-resolving digital rock images. Such an approach enables rapid imaging of larger samples, which in turn improves the statistical relevance of the subsequent analysis. We demonstrate the application of several CNNs for image restoration applicable to scientific imaging. The results show that images can be denoised with great confidence without a priori knowledge of the noise. Furthermore, we show how attaching several CNNs in an end-to-end fashion can improve the final quality of reconstruction. Our experiments with SEM and CT scan images of several rock types show that image denoising, deblurring and super-resolution can be performed simultaneously.
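
A sketch of the three-component noise model discussed in the citing snippet above: Poisson shot noise, additive Gaussian noise from the electronics, and multiplicative speckle. Parameter values are illustrative, not those used in the study.

```python
import numpy as np

def add_mixed_noise(img: np.ndarray, peak: float = 30.0,
                    gauss_sigma: float = 0.02, speckle_sigma: float = 0.05,
                    seed: int = 0) -> np.ndarray:
    """Corrupt an image in [0, 1] with the three noise sources discussed:
    Poisson (shot noise from electron counting), additive Gaussian
    (microscope electronics), and multiplicative speckle. All parameter
    values are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    shot = rng.poisson(img * peak) / peak                      # Poisson
    noisy = shot + rng.normal(0.0, gauss_sigma, img.shape)     # Gaussian
    noisy *= 1.0 + rng.normal(0.0, speckle_sigma, img.shape)   # speckle
    return np.clip(noisy, 0.0, 1.0)
```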
Preprint
Full-text available
Application of scanning transmission electron microscopy (STEM) to in situ observation will be essential in current and emerging data-driven materials science, taking into account STEM's high affinity with various analytical options. As is well known, STEM's image acquisition time needs to be further shortened to capture a targeted phenomenon in real time, as STEM's current temporal resolution is far below conventional TEM's. However, rapid image acquisition at milliseconds per frame or faster generally causes image distortion, poor electron signals, and unidirectional blurring, which are obstacles to realizing video-rate STEM observation. Here we show an image correction framework integrating deep learning (DL)-based denoising and image distortion correction schemes optimized for STEM rapid image acquisition. By comparing a series of distortion-corrected rapid scan images with corresponding regular scan speed images, the trained DL network is proven to remove not only the statistical noise but also the unidirectional blurring. This result demonstrates that rapid as well as high-quality image acquisition by STEM can be established by DL without hardware modification. The DL-based noise filter could be applied to in situ observation, such as dislocation activities under external stimuli, with high spatio-temporal resolution.
Article
In the context of hardware trust and assurance, reverse engineering has often been considered an illegal action. Generally speaking, reverse engineering aims to retrieve information from a product, i.e., integrated circuits (ICs) and printed circuit boards (PCBs) in hardware security-related scenarios, in the hope of understanding the functionality of the device and determining its constituent components. Hence, it can raise serious issues concerning Intellectual Property (IP) infringement, the (in)effectiveness of security-related measures, and even new opportunities for injecting hardware Trojans. Ironically, reverse engineering can enable IP owners to verify and validate the design. Nevertheless, this cannot be achieved without overcoming numerous obstacles that limit successful outcomes of the reverse engineering process. This article surveys these challenges from two complementary perspectives: image processing and machine learning. These two fields of study form a firm basis for the enhancement of efficiency and accuracy of reverse engineering processes for both PCBs and ICs. In summary, therefore, this article presents a roadmap indicating clearly the actions to be taken to fulfill hardware trust and assurance objectives.
Preprint
Full-text available
The constantly growing production of synthetic materials and their presence in the environment are gradually transforming our Blue Planet into a Plastic One. Microplastics (MPs) enlarge their surface significantly during fragmentation processes. Undoubtedly, nanoplastics (NPs), emerging contaminants, and the Plastisphere, the total available surface of debris, are currently at the cutting edge of science. Although some research is dedicated to the analysis of MPs and NPs from the physical and chemical point of view, the correlation between materials characterization and microbiological data is lacking. The ecological approach, covering the enumeration of antibiotic- or metal-resistant bacteria, toxicological issues, and biodegradation, is of great importance. This paper creates a bridge between the materials science approach and the 'eighth continent' (as the Plastisphere is sometimes called). It points out that the Plastisphere's significance will grow in the coming years and that it should not be regarded as one ecological niche but as a set of different ones. As the properties depend mainly on surface morphology, numerical characterization of the surface will be the basis for classification, to better describe and model this phenomenon. Apart from addressing the currently important issues of NPs and the Plastisphere, this paper presents an emerging area of research, namely the numerical approach to their characterization. This proposal of an interdisciplinary approach to classifying the Plastisphere's types might be interesting for members of different scientific communities: nanotechnology, materials science and engineering, chemistry, physics, ecology, microbiology, marine microplastics, and image analysis.
Article
Full-text available
When uploading multimedia data such as photos or videos to social network services and websites, certain parts of the human body or personal information are often exposed. Therefore, a person's face is frequently blurred out to protect their portrait rights, and repulsive objects are covered with mosaic blocks to prevent others from feeling disgusted. In this paper, an algorithm that detects mosaic regions blurring out certain blocks, based on edge projection, is proposed. The proposed algorithm first detects edges and uses horizontal and vertical line-edge projections to detect candidate mosaic blocks. Subsequently, geometrical features such as size, aspect ratio, and compactness are used to filter the candidate blocks, and the actual mosaic blocks are finally detected. Experimental results showed that the proposed algorithm detects mosaic blocks more accurately than other methods.
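
A sketch of the first stage of the described detector: edges are projected onto the horizontal and vertical axes, where the regular grid of a mosaic produces strong peaks in both projections. The threshold is an assumed value, and the geometric filtering stage (size, aspect ratio, compactness) is omitted.

```python
import cv2
import numpy as np

def mosaic_candidate_rows_cols(gray: np.ndarray, min_peak: float = 0.3):
    """Edge-projection stage of a mosaic-block detector. `gray` is an
    8-bit grayscale image; Canny thresholds and `min_peak` are
    illustrative, and later geometric filtering is not shown."""
    edges = cv2.Canny(gray, 50, 150) / 255.0
    h_proj = edges.sum(axis=1) / edges.shape[1]   # per-row edge density
    v_proj = edges.sum(axis=0) / edges.shape[0]   # per-column edge density
    rows = np.flatnonzero(h_proj > min_peak)      # candidate grid rows
    cols = np.flatnonzero(v_proj > min_peak)      # candidate grid columns
    return rows, cols
```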
Article
Full-text available
Deep learning is transforming most areas of science and technology, including electron microscopy. This review paper offers a practical perspective aimed at developers with limited familiarity. For context, we review popular applications of deep learning in electron microscopy. Following, we discuss hardware and software needed to get started with deep learning and interface with electron microscopes. We then review neural network components, popular architectures, and their optimization. Finally, we discuss future directions of deep learning in electron microscopy.
Book
Full-text available
Additive manufacturing (AM) or, more commonly, 3D printing is one of the fundamental elements of Industry 4.0 and the fourth industrial revolution. It has shown its potential in, for example, the medical, automotive, aerospace, and spare-part sectors. Personal manufacturing, complex and optimized parts, short-series manufacturing and local on-demand manufacturing are some of its current benefits. Businesses based on AM have experienced double-digit growth in recent years. Accordingly, we have witnessed considerable efforts in developing processes and materials in terms of speed, cost, and availability, which continually open up new applications and business cases that did not previously exist. Most research has focused on material and AM process development or on efforts to utilize existing materials and processes for industrial applications. However, improving the understanding and simulation of materials and AM processes, and understanding the effect of the different steps in the AM workflow, can increase performance even more. The best way to benefit from AM is to understand all the steps involved, from design and simulation through additive manufacturing and post-processing to the actual application. The objective of this Special Issue was to provide a forum for researchers and practitioners to exchange their latest achievements and identify critical issues and challenges for future investigations on "Modeling, Simulation and Data Processing for Additive Manufacturing". The Special Issue consists of 10 original full-length articles on the topic.
Article
Denoising is a fundamental challenge in scientific imaging. Deep convolutional neural networks (CNNs) provide the current state of the art in denoising photographic images. However, their potential has been inadequately explored for scientific imaging. Denoising CNNs are typically trained on clean images corrupted with artificial noise, but in scientific applications, noiseless ground-truth images are usually not available. To address this, we propose a simulation-based denoising (SBD) framework, in which CNNs are trained on simulated images. We test the framework on transmission electron microscopy (TEM) data, showing that it outperforms existing techniques on a simulated benchmark dataset, and on real data. We analyze the generalization capability of SBD, demonstrating that the trained networks are robust to variations of imaging parameters and of the underlying signal structure. Our results reveal that state-of-the-art architectures for denoising photographic images may not be well adapted to scientific-imaging data. For instance, substantially increasing their field-of-view dramatically improves their performance on TEM images acquired at low signal-to-noise ratios. We also demonstrate that standard performance metrics for photographs (such as peak signal-to-noise ratio) may not be scientifically meaningful, and propose several metrics to remedy this issue in the case of TEM images. In addition, we propose a technique, based on likelihood computations, to visualize the agreement between the structure of the denoised images and the observed data. Finally, we release a publicly available benchmark dataset containing 18,000 simulated TEM images.
Conference Paper
Full-text available
Model-based optimization methods and discriminative learning methods have been the two dominant strategies for solving various inverse problems in low-level vision. Typically, these two kinds of methods have their respective merits and drawbacks: model-based optimization methods are flexible for handling different inverse problems but are usually time-consuming, requiring sophisticated priors for good performance; meanwhile, discriminative learning methods have fast testing speed, but their application range is greatly restricted by the specialized task. Recent works have revealed that, with the aid of variable splitting techniques, a denoiser prior can be plugged in as a modular part of model-based optimization methods to solve other inverse problems (e.g., deblurring). Such an integration induces considerable advantage when the denoiser is obtained via discriminative learning. However, the study of integration with fast discriminative denoiser priors is still lacking. To this end, this paper aims to train a set of fast and effective CNN (convolutional neural network) denoisers and integrate them into model-based optimization methods to solve other inverse problems. Experimental results demonstrate that the learned set of denoisers not only achieves promising Gaussian denoising results but can also be used as a prior to deliver good performance for various low-level vision applications.
Article
Full-text available
Purpose: Due to the potential risk of inducing cancer, radiation exposure by X-ray CT devices should be reduced for routine patient scanning. However, in low-dose X-ray CT, severe artifacts typically occur due to photon starvation, beam hardening, and other causes, all of which decrease the reliability of the diagnosis. Thus, a high-quality reconstruction method from low-dose X-ray CT data has become a major research topic in the CT community. Conventional model-based de-noising approaches are, however, computationally very expensive, and image-domain de-noising approaches cannot readily remove CT-specific noise patterns. To tackle these problems, we want to develop a new low-dose X-ray CT algorithm based on a deep-learning approach. Method: We propose an algorithm which uses a deep convolutional neural network (CNN) which is applied to the wavelet transform coefficients of low-dose CT images. More specifically, using a directional wavelet transform to extract the directional component of artifacts and exploit the intra- and inter- band correlations, our deep network can effectively suppress CT-specific noise. In addition, our CNN is designed with a residual learning architecture for faster network training and better performance. Results: Experimental results confirm that the proposed algorithm effectively removes complex noise patterns from CT images derived from a reduced X-ray dose. In addition, we show that the wavelet-domain CNN is efficient when used to remove noise from low-dose CT compared to existing approaches. Our results were rigorously evaluated by several radiologists at the Mayo Clinic and won second place at the 2016 "Low-Dose CT Grand Challenge." Conclusions: To the best of our knowledge, this work is the first deep-learning architecture for low-dose CT reconstruction which has been rigorously evaluated and proven to be effective. In addition, the proposed algorithm, in contrast to existing model-based iterative reconstruction (MBIR) methods, has considerable potential to benefit from large data sets. Therefore, we believe that the proposed algorithm opens a new direction in the area of low-dose CT research.
Article
Full-text available
Discriminative model learning for image denoising has recently been attracting considerable attention due to its favorable denoising performance. In this paper, we take one step forward by investigating the construction of feed-forward denoising convolutional neural networks (DnCNNs) to embrace progress in very deep architectures, learning algorithms, and regularization methods for image denoising. Specifically, residual learning and batch normalization are utilized to speed up the training process as well as boost the denoising performance. Different from existing discriminative denoising models, which usually train a specific model for additive white Gaussian noise (AWGN) at a certain noise level, our DnCNN model is able to handle Gaussian denoising with unknown noise level (i.e., blind Gaussian denoising). With the residual learning strategy, DnCNN implicitly removes the latent clean image in the hidden layers. This property motivates us to train a single DnCNN model to tackle several general image denoising tasks, such as Gaussian denoising, single-image super-resolution, and JPEG image deblocking. Our extensive experiments demonstrate that our DnCNN model not only exhibits high effectiveness in several general image denoising tasks but can also be efficiently implemented by benefiting from GPU computing.
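
The layer recipe this abstract describes (a first Conv+ReLU layer, a stack of Conv+BatchNorm+ReLU layers, and a final Conv predicting the noise residual) can be written down directly. A hedged PyTorch sketch with the commonly quoted defaults (17 layers, 64 channels); consult the paper for per-task configurations.

```python
import torch
import torch.nn as nn

class DnCNN(nn.Module):
    """DnCNN as described in the abstract: Conv+ReLU, then a stack of
    Conv+BatchNorm+ReLU layers, then a final Conv that predicts the
    noise residual, which is subtracted from the input. Depth and
    width follow commonly used defaults, not necessarily the paper's
    exact per-task settings."""
    def __init__(self, depth: int = 17, channels: int = 64):
        super().__init__()
        layers = [nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(channels, channels, 3, padding=1, bias=False),
                       nn.BatchNorm2d(channels),
                       nn.ReLU(inplace=True)]
        layers.append(nn.Conv2d(channels, 1, 3, padding=1))
        self.body = nn.Sequential(*layers)

    def forward(self, noisy: torch.Tensor) -> torch.Tensor:
        return noisy - self.body(noisy)   # residual learning
```

Training would then minimize the error between `self.body(noisy)` and the true noise map, per the residual learning strategy the abstract describes.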
Conference Paper
Full-text available
Linewidth roughness (LWR) is usually estimated simply as three standard deviations of the linewidth. The effect of image noise upon this metric includes a positive nonrandom component. The metric is therefore subject to a bias or "systematic error" that we have estimated can be comparable in size to the roughness itself for samples as smooth as required by the industry roadmap. We illustrate the problem using scanning electron microscope images of rough lines. We propose simple changes to the measurement algorithm that, if adopted by metrology instrument suppliers, would permit estimation of LWR without bias caused by image noise.
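The bias described above is commonly summarized by a quadrature relation: if the edge-detection noise is uncorrelated with the true roughness, the measured variance is the sum of the two contributions, so an unbiased estimate follows once the noise variance is estimated independently (this is the standard formulation; the paper's contribution concerns how the noise term is estimated in practice):

\sigma_{\mathrm{meas}}^{2} = \sigma_{\mathrm{LWR}}^{2} + \sigma_{\mathrm{noise}}^{2}
\quad\Longrightarrow\quad
\mathrm{LWR}_{\mathrm{unbiased}} \approx 3\sqrt{\sigma_{\mathrm{meas}}^{2} - \sigma_{\mathrm{noise}}^{2}}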
Article
Full-text available
As transistors are scaled down, undesirable performance mismatch between identically designed transistors increases and hence has a greater impact on circuit performance and yield. Since Line-Edge Roughness (LER) has been reported to be on the order of several nanometers and not to decrease as devices shrink, it has become a critical problem for sub-45nm devices and may lead to serious device parameter fluctuations and performance limitations in future VLSI circuits. Although LER is a random variation, it must be analyzed because it causes device characteristics to fluctuate. In this paper, we present a new cell characterization methodology that uses the non-rectangular gate print-images generated by lithography and etch simulations with random LER variation to estimate the device performance of a sub-45nm design. A physics-based TCAD simulation tool is used to validate the accuracy of our LER model. We systematically analyze random LER by taking into consideration its impact on circuit performance, and we suggest the maximum tolerance of LER that minimizes performance degradation. We observe that the driving current is strongly affected by LER as the gate length shrinks. We performed lithography simulations using a 45nm process window to examine the LER impact on state-of-the-art industrial devices. Results show that the rms value of LER is as much as 10% of the nominal line edge, and the saturation current can vary by as much as 10% in our 2-input NAND cell.
Article
Full-text available
Resolution is a key performance metric that often defines the quality of a scanning electron microscope (SEM). Traditionally, there is the subjective measurement of the distance between two points on special "resolution" samples, and there are several computer-based resolution-calculation methods. These computer-based methods are much more precise than direct measurement, but none of them can currently be considered an objective way of measuring resolution. The methods are still under development; therefore, objective testing is necessary. One approach to algorithm testing is to use simulated images. Simulated images are very useful for this purpose because, unlike real SEM images, they can be well defined in all parameters. Simulated images can be generated that closely mimic the gold-on-carbon SEM test-sample images, which usually consist of bright grains on a dark background. The simulation can account for edge effects, substrate roughness, different focusing, drift and vibration, and noise. The shapes, positions, and sizes of the grain structures are random. The simulated images can then be used for testing resolution-calculation methods, especially for finding how particular properties of SEM images affect the resulting instrument performance and image resolution. To support this testing, NIST has developed and made available a reference set of simulated SEM images generated using the methods described in this article.
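A toy version of such a generator fits in a dozen lines: draw random bright grains on a dark background, blur to emulate the finite probe, and add Poisson shot noise. The grain radii, dose, and blur below are assumptions; the NIST simulator models many more effects (edge effect, drift, vibration):

import numpy as np
from scipy.ndimage import gaussian_filter

def synth_sem(size=512, n_grains=120, blur=1.5, dose=40.0, seed=None):
    rng = np.random.default_rng(seed)
    img = np.full((size, size), 0.1)                   # dark carbon background
    yy, xx = np.mgrid[:size, :size]
    for _ in range(n_grains):                          # random bright grains
        cy, cx = rng.uniform(0, size, 2)
        r = rng.uniform(3, 15)
        img[(yy - cy) ** 2 + (xx - cx) ** 2 < r ** 2] = 1.0
    img = gaussian_filter(img, blur)                   # finite probe / defocus
    return rng.poisson(img * dose) / dose              # shot (Poisson) noise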
Article
Two fundamental challenges of line edge roughness (LER) metrology are to provide complete and accurate measurement of LER. We focus on recent advances concerning both challenges, inspired by mathematical and computational methods. Regarding the challenge of completeness: (a) we elaborate on the multifractal analysis of LER, which decomposes the scaling behavior of edge undulations into a spectrum of fractal dimensions, similarly to what a power spectral density (PSD) does in the frequency domain. Emphasis is given to the physical meaning of the multifractal spectrum and its sensitivity to pattern transfer and etching; (b) we present metrics and methods for the quantification of cross-line (interfeature) correlations between the roughness of edges belonging to the same and nearby lines. We apply these metrics to quantify the correlations in a self-aligned quadruple patterning lithography. Regarding the challenge of accuracy, we present a PSD-based method for noise-reduced (sometimes called unbiased) LER metrology and validate it through the analysis of synthesized SEM images. Furthermore, the method is extended to the use of height-height correlation functions to deliver noise-reduced estimates of the correlation length and the roughness exponent of LER. © 2018 Society of Photo-Optical Instrumentation Engineers (SPIE).
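To illustrate the PSD-based part, the sketch below averages the PSD over many detected edges, estimates the white-noise floor from the high-frequency tail, and subtracts it before integrating back to a variance; the tail fraction is an assumption for illustration, not the paper's prescription:

import numpy as np

def unbiased_ler(edges, tail=0.25):
    """edges: (n_lines, n_points) array of detected edge positions."""
    n = edges.shape[1]
    spec = np.fft.rfft(edges - edges.mean(axis=1, keepdims=True), axis=1)
    psd = np.mean(np.abs(spec) ** 2, axis=0)
    floor = psd[int((1 - tail) * len(psd)):].mean()              # flat noise floor
    sigma2 = (psd - floor).clip(min=0)[1:].sum() * 2 / n ** 2    # Parseval
    return 3 * np.sqrt(sigma2)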
Article
Most scanning electron microscope (SEM) measurements of pattern roughness today produce biased results, combining the true feature roughness with noise from the SEM. Further, the bias caused by SEM noise changes with measurement conditions and with the features being measured. The goal of unbiased roughness measurement is to both provide a better estimate of the true feature roughness and to provide measurements that are independent of measurement conditions. Using an inverse linescan model for edge detection, the noise in SEM edge and width measurements can be measured and removed statistically from roughness measurements. This approach was tested using different pixel sizes, magnifications, and frames of averaging on several different post-lithography and post-etch patterns. Over a useful range of metrology conditions, the unbiased roughness measurements were effectively independent of these metrology parameters.
Conference Paper
Monte Carlo-based SEM image simulation can reproduce SEM micrographs by calculating the scattering events of primary electrons inside the target materials. Using simulated SEM images, it is possible to optimize imaging conditions prior to specimen observation, which can save the time needed to find a suitable observation condition. However, the recent trend toward miniaturized, three-dimensional semiconductor device structures and the introduction of various novel materials have created a challenge for such SEM image simulation techniques: more precise and accurate modeling is required. In this paper, we present a quantitatively accurate BSE simulation and a precise parameter setting for voltage-contrast simulation, both of which reproduce experimental SEM images accurately. We apply these simulation techniques to optimize the SEM accelerating voltage for sub-surface imaging and to analyze the charge distribution on an insulating specimen under electron irradiation. These applications promise to advance new-device development by establishing inspection conditions in a timely manner.
Article
Measurements of line edge roughness (LER) and critical dimension (CD) from scanning electron microscope (SEM) images are often required for analyzing circuit patterns transferred onto substrate systems. A common approach is to employ image processing techniques to detect feature boundaries, from which the LER and CD are computed. SEM images usually contain a significant level of noise, which affects the accuracy of the measured LER and CD. This requires reducing the noise level with a low-pass filter before detecting feature boundaries. However, a low-pass filter also tends to destroy boundary detail. Therefore, a careful selection of the low-pass filter is necessary to achieve high accuracy in LER and CD measurements. In this paper, a practical method for designing a Gaussian filter to reduce the noise level in SEM images is proposed. The method uses information extracted from a given SEM image to adaptively determine the sharpness and size of the Gaussian filter. An analysis of the effectiveness of the Gaussian filter designed by the proposed method is provided.
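A hedged sketch of the adaptive idea: estimate the noise level from a user-supplied flat (feature-free) region and map it to a filter width. The mapping rule and its constants below are our illustrative assumptions, not the paper's design formula:

import numpy as np
from scipy.ndimage import gaussian_filter

def adaptive_gaussian(img, flat_region, k=10.0):
    noise_std = img[flat_region].std()                  # noise from a flat area
    contrast = np.percentile(img, 99) - np.percentile(img, 1)
    sigma = np.clip(k * noise_std / (contrast + 1e-9), 0.5, 5.0)
    return gaussian_filter(img, sigma=sigma)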
Article
One of the great challenges in next generation lithography is to print linear features with controllable sidewall roughness, which is usually called line edge/line width roughness (LER/LWR). The aim of this chapter is to provide an interdisciplinary approach to LER/LWR covering all related aspects. To this end, after a short introduction to LER/LWR concepts, it reports the basic findings of recent intensive research concerning the metrology and characterization, the material and process origins, and the device effects of LER/LWR. Both simulation and experimental results are presented, and emphasis is given to their comparison.
Article
We introduce Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions. The method is straightforward to implement and is based on adaptive estimates of lower-order moments of the gradients. The method is computationally efficient, has modest memory requirements, and is well suited for problems that are large in terms of data and/or parameters. The method is also appropriate for non-stationary objectives and problems with very noisy and/or sparse gradients. The method is invariant to diagonal rescaling of the gradients, adapting to the geometry of the objective function. The hyper-parameters have intuitive interpretations and typically require little tuning. Some connections to related algorithms, by which Adam was inspired, are discussed. We also analyze the theoretical convergence properties of the algorithm and provide a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework. We demonstrate that Adam works well in practice when compared experimentally to other stochastic optimization methods.
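The update itself is compact enough to restate; this NumPy transcription follows the published algorithm with its default hyper-parameters (the state starts as two zero arrays and a step counter of 0):

import numpy as np

def adam_step(theta, grad, state, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    m, v, t = state
    t += 1
    m = b1 * m + (1 - b1) * grad             # 1st-moment (mean) estimate
    v = b2 * v + (1 - b2) * grad ** 2        # 2nd-moment (uncentered variance)
    m_hat = m / (1 - b1 ** t)                # bias correction
    v_hat = v / (1 - b2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, (m, v, t)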
Article
An off-line image analysis algorithm and software are developed for the calculation of line-edge roughness (LER) of resist lines and are successfully compared with on-line LER measurements. The effect of several image-processing parameters on the fidelity of the off-line LER measurement is examined. The parameters studied include the scanning electron microscopy magnification, the image pixel size, the Gaussian noise-smoothing filter parameters, and the line-edge determination algorithm. The issues of adequate statistics and appropriate sampling frequency are also investigated. The advantages of off-line LER quantification and recommendations for on-line measurement are discussed. Having introduced a robust algorithm for edge detection in Paper I, Paper II [V. Constantoudis et al., J. Vac. Sci. Technol. B 21, 1019 (2003)] of this series introduces the appropriate parameters to fully quantify LER. © 2003 American Vacuum Society.
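A bare-bones off-line pipeline of the kind examined above can be sketched as follows: smooth each scan line, locate the edge as the first threshold crossing, remove a linear trend, and report 3σ of the residuals (the threshold level and smoothing width are illustrative choices):

import numpy as np
from scipy.ndimage import gaussian_filter1d

def ler_from_image(img, smooth=2.0, level=0.5):
    thr = img.min() + level * (img.max() - img.min())
    rows = gaussian_filter1d(img.astype(float), smooth, axis=1)
    edges = np.array([np.argmax(row > thr) for row in rows], dtype=float)
    y = np.arange(len(edges))
    trend = np.polyval(np.polyfit(y, edges, 1), y)      # remove line tilt
    return 3 * np.std(edges - trend)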
Article
We propose a new method for the evaluation of line-edge or linewidth roughness (LER/LWR). Conventional, directly measured LER/LWR values always contain a random-noise contribution, called the LER/LWR bias. Our method can separate this bias artifact from the true LER/LWR using a single image of the sample pattern. The idea is based on the dependence of a measured LER/LWR value on the image-processing parameter used for noise reduction. Both the conventional and the new bias-free LER were calculated on series of images with different frame-integration numbers but a fixed field of view. In addition, the validity of the method for gate-LWR measurement on an ArF resist line pattern was examined. The LER/LWR obtained by our method was independent of the frame number and agreed with the conventional LER/LWR measured on an image with a sufficiently large frame number. That is, our method can evaluate LER/LWR without the random-noise contribution, suggesting that it can be applied to images recorded under low-sample-damage conditions (i.e., low signal-to-noise ratio). We conclude that the proposed bias-free LER/LWR measurement method will be a powerful tool in lithography metrology, especially for achieving practical and accurate LER/LWR measurement with low sample damage.
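The frame-number test reported above is usually rationalized as follows (our summary, not the paper's notation): averaging N frames scales the noise variance as 1/N, so plotting the measured variance against 1/N and extrapolating to the intercept recovers the bias-free roughness:

\sigma_{\mathrm{meas}}^{2}(N) = \sigma_{\mathrm{LER}}^{2} + \frac{\sigma_{\mathrm{noise}}^{2}(1)}{N}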
Article
At present, the most widely used technique for Line Edge and Width Roughness (LER/LWR) measurement is based on the analysis of top-down CD-SEM images. However, the presence of noise in these images strongly affects the extracted edge morphologies, leading to biased LER/LWR measurements. In the last few years, significant progress has been made towards the acquisition of noise-free LER/LWR metrics. The output of all proposed methods is the noise-free rms value Rq, estimated using lines of sufficiently long length. Nevertheless, one of the recent advances in LER/LWR metrology is the realization that a single Rq value does not provide a complete description of LER, and a three-parameter model has been introduced, including the Rq value at infinite line length, the correlation length ξ, and the roughness exponent α. The latter two parameters describe the spatial fluctuations of edge morphology and can be calculated from the height-height correlation function G(r) or from the dependence of the rms value Rq on line length, Rq(L). In this paper, a methodology for the noise-free estimation of G(r) and Rq(L) is proposed. Following Villarrubia et al. [Proc. SPIE 5752, 480 (2005)], we obtain a formula for the noise-free estimation of G(r) and assess its predictions by implementing and applying a modeling approach. We also extend the methodology of Yamaguchi et al. [Proc. SPIE 6152, 61522D (2006)] to a large range of line lengths and show that it can provide a reliable noise-free estimation of the whole Rq(L) curve.
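For the G(r) part, the noise correction takes a particularly simple form: white edge noise of variance σn² adds a constant 2σn² to the mean squared edge differences for r > 0. A minimal sketch, assuming σn has been estimated elsewhere (e.g., from a PSD floor fit):

import numpy as np

def hhcf_noise_free(edge, sigma_n):
    """Noise-corrected height-height correlation function of one edge."""
    n = len(edge)
    g2 = np.array([np.mean((edge[r:] - edge[:-r]) ** 2) for r in range(1, n // 2)])
    return np.sqrt(np.clip(g2 - 2 * sigma_n ** 2, 0, None))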
Article
Sparse representation models code an image patch as a linear combination of a few atoms chosen from an over-complete dictionary and have shown promising results in various image restoration applications. However, due to the degradation of the observed image (e.g., noisy, blurred, and/or downsampled), the sparse representations obtained by conventional models may not be accurate enough for a faithful reconstruction of the original image. To improve the performance of sparse-representation-based image restoration, this paper introduces the concept of sparse coding noise, and the goal of image restoration becomes suppressing the sparse coding noise. To this end, we exploit image nonlocal self-similarity to obtain good estimates of the sparse coding coefficients of the original image and then centralize the sparse coding coefficients of the observed image toward those estimates. The resulting nonlocally centralized sparse representation (NCSR) model is as simple as the standard sparse representation model, while our extensive experiments on various image restoration problems, including denoising, deblurring, and super-resolution, validate the generality and state-of-the-art performance of the proposed NCSR algorithm.
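The centralization step can be illustrated in a few lines. This is a heavy simplification (real NCSR iterates sparse coding, block matching, and reconstruction); the soft-threshold shrinkage toward the nonlocal estimate is our illustrative reading of the penalty on the coefficient deviation:

import numpy as np

def centralize(codes, neighbors, lam=0.1):
    """codes: (n_patches, n_atoms) sparse codes; neighbors: per-patch index
    arrays of similar patches (found by block matching, assumed given)."""
    beta = np.stack([codes[idx].mean(axis=0) for idx in neighbors])  # nonlocal estimate
    dev = codes - beta
    return beta + np.sign(dev) * np.maximum(np.abs(dev) - lam, 0)    # shrink toward beta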
Conference Paper
Restricted Boltzmann machines were developed using binary stochastic hidden units. These can be generalized by replacing each binary unit with an infinite number of copies that all have the same weights but progressively more negative biases. The learning and inference rules for these "stepped sigmoid units" are unchanged. They can be approximated efficiently by noisy rectified linear units. Compared with binary units, these units learn features that are better for object recognition on the NORB dataset and face verification on the Labeled Faces in the Wild dataset. Unlike binary units, rectified linear units preserve information about relative intensities as information travels through multiple layers of feature detectors.
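The approximation mentioned above is essentially one line: sample Gaussian noise whose variance is the logistic sigmoid of the input, add it, and rectify (our transcription of the noisy rectified linear unit):

import numpy as np

def nrelu(x, seed=None):
    rng = np.random.default_rng(seed)
    var = 1.0 / (1.0 + np.exp(-x))           # sigmoid(x) as the noise variance
    return np.maximum(0.0, x + rng.normal(0.0, np.sqrt(var)))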
Article
Mini-batch algorithms have been proposed as a way to speed up stochastic convex optimization. We study how such algorithms can be improved using accelerated gradient methods. We provide a novel analysis, which shows how standard gradient methods may sometimes be insufficient to obtain a significant speed-up, and we propose a novel accelerated gradient algorithm that addresses this deficiency, enjoys a uniformly superior guarantee, and works well in practice.
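As a stand-in for the paper's specific algorithm (which differs in its step sizes and analysis), a minimal accelerated mini-batch loop with Nesterov-style momentum looks like this; grad_fn, the batch size, and the schedule are assumptions:

import numpy as np

def accelerated_minibatch(grad_fn, theta, data, lr=0.1, mu=0.9,
                          batch=32, epochs=10, seed=0):
    rng = np.random.default_rng(seed)
    v = np.zeros_like(theta)
    for _ in range(epochs):
        idx = rng.permutation(len(data))
        for s in range(0, len(data), batch):
            g = grad_fn(theta + mu * v, data[idx[s:s + batch]])  # look-ahead gradient
            v = mu * v - lr * g
            theta = theta + v
    return theta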
S. Ioffe, C. Szegedy, Batch normalization: accelerating deep network training by reducing internal covariate shift, International Conference on Machine Learning, 2015, pp. 448-456.