In this literature review, we analyze foundation models: AI models trained on broad data that can be adapted to a wide range of downstream tasks. We explore the background, capabilities, and limitations of foundation models, focusing on text and image generation, and discuss the factors behind their success, including scale and novel model architectures. We examine the capabilities of foundation models across domains, including plaintext generation, software applications, and art generation, and provide concrete examples that illustrate their potential applications. We further discuss their limitations, particularly with respect to output quality, coherence, generation of up-to-date content, multilingual capabilities, and associated costs. Moreover, we critically examine the concerns raised by foundation models, including misuse in general and academic settings, bias, accessibility, copyright, consent, and data privacy. To address these risks and challenges, we propose a range of mitigations: adapting foundation models to detect misuse, reforming educational processes, including diverse actors in model training and evaluation, investing in public AI infrastructure, establishing reasonable laws governing AI-generated works, improving adherence to data privacy laws, and developing proper cloaking and attribution systems for AI-generated artwork. Overall, foundation models hold significant potential for transformative breakthroughs in AI, but their responsible development, deployment, and use require careful consideration of their limitations and the implementation of effective mitigation strategies.