In this study, a method based on deep learning classification algorithms was developed to automate observation-based post-earthquake damage assessment. Given the large discrepancy between the number of experts involved in the post-earthquake assessment process and the number of damaged buildings, it is doubtful whether interventions can be carried out in a timely manner, which motivates the use of supplementary tools to accelerate the decision-making process when assessing the level of building damage. To construct the data set for this study, post-earthquake photographs of reinforced concrete structures were obtained from specialists who visited the earthquake zone and from the damage assessment website. The data set consists of damage assessment images collected in Kahramanmaraş and Hatay after the February 6, 2023 Kahramanmaraş earthquakes. Because entering the damaged buildings was dangerous, the number of images remained below the desired amount; therefore, the initial data set was enlarged through data augmentation using vertical mirroring. The authors first labeled the collected images into two categories: those showing structural damage to reinforced concrete buildings and those that did not.

A convolutional neural network, one of the deep learning techniques, was trained on these data. During feature extraction, the model learned patterns from the pixel data of the images. The trained model was then fed the damage images from the test data set and, based on the learned patterns, predicted the class of each image with high probability. The convolutional neural network was trained with a batch size of 32 for 100 epochs. Both the training loss and the validation loss decreased during training, while the training and validation accuracy increased. The training curves showed a maximum training accuracy of 96.18% and a minimum loss value of 0.11, which is further supported by the loss plots over the number of epochs. The model confirmed this performance on the test data. For the first test image, the model's prediction of structural damage matched the authors' label, and for the second test image, the model's prediction of non-structural damage likewise agreed with the authors' classification.
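The training pipeline described above can be illustrated with a short sketch. The code below is a minimal, illustrative example rather than the authors' implementation: the directory layout (`data/structural`, `data/nonstructural`), the input resolution, and the network architecture (number of convolutional blocks and filter counts) are assumptions, while the vertical-flip augmentation, binary structural/non-structural labels, batch size of 32, and 100 training epochs follow the description in the text.

```python
# Minimal sketch of a binary damage classifier (structural vs. non-structural).
# Assumes images are stored as data/structural/*.jpg and data/nonstructural/*.jpg.
import tensorflow as tf
from tensorflow.keras import layers

IMG_SIZE = (224, 224)   # assumed input resolution
BATCH_SIZE = 32         # batch size reported in the study
EPOCHS = 100            # number of epochs reported in the study

# Load the labeled images and split them into training and validation subsets.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data", validation_split=0.2, subset="training", seed=42,
    image_size=IMG_SIZE, batch_size=BATCH_SIZE, label_mode="binary")
val_ds = tf.keras.utils.image_dataset_from_directory(
    "data", validation_split=0.2, subset="validation", seed=42,
    image_size=IMG_SIZE, batch_size=BATCH_SIZE, label_mode="binary")

# Data augmentation: vertical mirroring, as used to enlarge the data set.
# The flip is only applied during training, not at inference time.
augmentation = tf.keras.Sequential([layers.RandomFlip("vertical")])

# A simple convolutional network; the specific architecture is an assumption.
model = tf.keras.Sequential([
    layers.Input(shape=IMG_SIZE + (3,)),
    augmentation,
    layers.Rescaling(1.0 / 255),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(128, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # 1 = structural, 0 = non-structural
])

model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])

# Per-epoch training and validation loss/accuracy are stored in `history`.
history = model.fit(train_ds, validation_data=val_ds, epochs=EPOCHS)

# Predict the class of a single test image (file name is a placeholder).
img = tf.keras.utils.load_img("test_image.jpg", target_size=IMG_SIZE)
x = tf.expand_dims(tf.keras.utils.img_to_array(img), 0)
prob = model.predict(x)[0][0]
print("structural damage" if prob >= 0.5 else "non-structural damage", prob)
```

The `history` object returned by `fit` records the per-epoch training and validation loss and accuracy, from which curves like those summarized above can be plotted.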