Kimberly A. Cornell’s scientific contributions

Publications (1)


Figure 1: Employee Actions & LLM Risks (by the authors)
Figure 4: Generation of a spear-phishing email impersonating an employee or colleague
Figure 8: A simple trojan that listens for incoming connections and executes any data received from the client
Figure 9: How steganography can be used to hide sensitive data within an image file and exfiltrate it to a remote server
Figure 10: Malware that runs persistently on a system, hiding its presence using rootkit techniques

Transparency, Security, and Workplace Training & Awareness in the Age of Generative AI
  • Preprint
  • File available

December 2024 · 43 Reads

Lakshika Vaishnav · Sakshi Singh · Kimberly A. Cornell

This paper investigates the impacts of the rapidly evolving landscape of generative Artificial Intelligence (AI) development. Emphasis is given to how organizations grapple with a critical imperative: reevaluating their policies on AI usage in the workplace. As AI technologies advance, ethical considerations, transparency, data privacy, and the impact on human labor intersect with the drive for innovation and efficiency. Our research explores publicly accessible large language models (LLMs) that often operate on the periphery, away from mainstream scrutiny. These lesser-known models have received limited scholarly analysis and may lack comprehensive restrictions and safeguards. Specifically, we examine Gab AI, a platform centered on unrestricted communication and privacy that allows users to interact freely without censorship. As generative AI chatbots become increasingly prevalent, the cybersecurity risks they pose have escalated as well. Organizations must navigate this evolving landscape carefully by implementing transparent AI usage policies, and frequent training and policy updates are essential to adapt to emerging threats. Insider threats, whether malicious or unwitting, remain among the most significant cybersecurity challenges in the workplace. By focusing on these lesser-known, publicly accessible LLMs and their implications for workplace policies, we contribute to the ongoing discourse on AI ethics, transparency, and security, emphasizing the need for well-thought-out guidelines and vigilance in policy maintenance.
