Question
Asked 14 June 2024
  • Hope Africa University

What are novel cryptographic protocols that enhance email security?

All Answers (2)

Wasswa Shafik
Dig Connectivity Research Laboratory (DCRLab)
Novel cryptographic protocols enhancing email security include end-to-end encryption methods such as Pretty Good Privacy (PGP) and Secure/Multipurpose Internet Mail Extensions (S/MIME), which ensure that only the intended recipients can decrypt messages. In addition, protocols such as DKIM (DomainKeys Identified Mail), SPF (Sender Policy Framework), and DMARC (Domain-based Message Authentication, Reporting, and Conformance) authenticate the sender's domain, reducing email spoofing and phishing attacks. More recent developments, such as S/MIME version 4.0 and the use of elliptic curve cryptography (ECC), offer strong encryption with reduced computational overhead. Post-quantum cryptography is also being explored to safeguard email communications against future quantum computing threats, providing long-term security assurances.
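As a concrete illustration of the sender-authentication side, here is a minimal sketch that looks up a domain's published SPF and DMARC policies over DNS. It assumes the third-party dnspython package is installed, and example.com is only a placeholder domain; this is an illustrative check, not a full validator.

```python
# Minimal sketch: fetch a domain's SPF and DMARC policy records via DNS.
# Assumes the third-party "dnspython" package (pip install dnspython).
import dns.resolver

def txt_records(name: str) -> list[str]:
    """Return all TXT records published at the given DNS name."""
    try:
        answers = dns.resolver.resolve(name, "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return []
    return [b"".join(r.strings).decode() for r in answers]

domain = "example.com"  # placeholder domain

# SPF lives in the domain's own TXT records and starts with "v=spf1".
spf = [r for r in txt_records(domain) if r.startswith("v=spf1")]

# DMARC policy is published at the _dmarc subdomain and starts with "v=DMARC1".
dmarc = [r for r in txt_records(f"_dmarc.{domain}") if r.startswith("v=DMARC1")]

print("SPF:", spf or "none published")
print("DMARC:", dmarc or "none published")
```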
You may want to check out the following references:
1. "Pretty Good Privacy: A Seminar Report on PGP" - This paper provides an in-depth analysis of PGP, its cryptographic mechanisms, and its role in securing email communication.
2. "Secure/Multipurpose Internet Mail Extensions (S/MIME) Version 4.0 Certificate Handling" - This RFC document outlines the enhancements in S/MIME v4.0, emphasizing secure email communication.
3. "Post-Quantum Cryptography for Long-Term Security" - This paper explores the application of post-quantum cryptographic algorithms to future-proof email encryption against quantum computing threats.
4. "DMARC: A New Tool to Fight Email Phishing" - This study investigates the efficacy of DMARC in preventing email spoofing and phishing attacks, detailing its implementation and impact.
5. "Elliptic Curve Cryptography in Practice" - This paper discusses the practical applications of ECC in securing digital communications, including its use in email security protocols.
These papers provide a comprehensive overview of the current advancements and research in email security through cryptographic protocols.
I hope this helps.
Shafik
2 Recommendations
Murtadha Shukur
Al-Furat Al-Awsat Technical University
Traditional email security protocols have well-known limitations. While advancements are ongoing, here are some novel cryptographic approaches that can improve email security:
  • Attribute-Based Encryption (ABE): This allows fine-grained access control on emails. Instead of a single key for all recipients, ABE assigns keys based on pre-defined attributes (e.g., department, project). Only users with the matching attributes can decrypt the email.
  • Homomorphic Encryption: This enables computation, such as keyword search, over encrypted emails without decrypting them first. This is particularly useful for secure email servers where content needs to be scanned for malicious content without compromising privacy.
  • Post-Quantum Cryptography (PQC): This is a new area of cryptography that addresses the vulnerability of traditional algorithms to future quantum computers. PQC algorithms are resistant to attacks by quantum computers, making them a good choice for securing future email communication.
  • Zero-Knowledge Proofs (ZKPs): ZKPs allow proving a statement to be true without revealing the underlying information. In email, this can be used to prove a sender's identity without revealing their private key, enhancing security while maintaining user privacy (a toy sketch of this idea follows the list below).
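To make that last point more concrete, below is a toy sketch of a Schnorr identification protocol, the textbook zero-knowledge-style proof of knowledge of a discrete-log private key. The group parameters are deliberately tiny and the flow is simplified for illustration; real deployments use standardized groups, much larger primes, and usually a non-interactive (Fiat-Shamir) variant.

```python
# Toy Schnorr identification protocol: prove knowledge of a private key x
# (where y = g^x mod p) without revealing x. Parameters are illustrative only.
import secrets

p = 23          # small safe prime (p = 2q + 1), demo only
q = 11          # prime order of the subgroup generated by g
g = 4           # generator of the order-q subgroup mod p

x = secrets.randbelow(q - 1) + 1     # prover's private key
y = pow(g, x, p)                     # public key

# Commitment: prover picks a random nonce r and sends t = g^r mod p
r = secrets.randbelow(q - 1) + 1
t = pow(g, r, p)

# Challenge: verifier picks a random c
c = secrets.randbelow(q)

# Response: prover sends s = r + c*x mod q
s = (r + c * x) % q

# Verification: g^s == t * y^c (mod p) holds iff the prover knows x
assert pow(g, s, p) == (t * pow(y, c, p)) % p
print("proof accepted")
```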
It's important to note that these are evolving areas of research, and some protocols might not be ready for widespread adoption yet. However, they represent promising directions for improving email security in the future.
2 Recommendations

Similar questions and discussions

[Call for papers] IEEE 2025 5th International Conference on Electronics, Circuits and Information Engineering (ECIE 2025) | Guangzhou, China
Discussion
2 replies
  • Kiuling Lai
The IEEE 2025 5th International Conference on Electronics, Circuits and Information Engineering (ECIE 2025) will be held in Guangzhou, China, during May 23-25, 2025.
---Call for papers---
The topics of interest include, but are not limited to:
· Wireless Systems
· Microwave Electronics
· Electromagnetics
· Electric vehicles, vehicular electronics and intelligent transportation
· Emerging technologies, 5G and beyond
· IoV, IoT, M2M, sensor networks, and Ad-Hoc networking
· Machine learning and optimization for wireless systems
· Positioning, navigation, and mobile satellite systems
· Radio access technology and heterogeneous networks
· Spectrum management for communications
· Reconfigurable intelligent surfaces and smart environments
· Unmanned aerial vehicle communications, vehicular networks, and telematics
· Wireless networks: protocols, security and services
· Cognitive radio and AI-enabled networks
· Communication and information system security
......
---Publication---
Submitted papers will be peer-reviewed by the conference committees. Accepted papers, after registration and presentation, will be published in the IEEE conference proceedings (ISBN: 979-8-3315-1401-3), which will be submitted for indexing by IEEE Xplore, EI Compendex, and Scopus.
---Important Dates---
Full Paper Submission Date: April 22, 2025
Final Paper Submission Date: May 16, 2025
Registration Deadline: May 25, 2025
Conference Dates: May 23-25, 2025
--- Paper Submission---
Please send the full paper (Word + PDF) to the submission system:
How to Design Safe AI Agents
Discussion
3 replies
  • Saikat Barua
The rapid advancement of artificial intelligence (AI) presents unprecedented opportunities across diverse fields, from healthcare and scientific discovery to transportation and entertainment. However, this progress is accompanied by growing concerns regarding the safety and reliability of these increasingly sophisticated systems [4, 6]. As AI agents become more autonomous and capable of complex actions, the potential for unintended consequences, misuse, and even catastrophic risks necessitates a concerted effort to design and deploy AI systems that are aligned with human values and societal well-being [2, 6]. This review synthesizes recent research on the design of safe AI agents, exploring key themes such as human-centered design, risk alignment, socioaffective considerations, and the development of robust safety mechanisms. We examine technical approaches, ethical considerations, and the crucial role of human oversight in ensuring the responsible development and deployment of AI agents.
Human-Centered Design and the Role of Human Agency
One prominent approach to designing safe AI agents centers on understanding and incorporating principles of human behavior and cognition [1]. This human-centered approach recognizes that AI systems are ultimately designed to interact with and impact human users, and therefore, their design must reflect and respect human values, goals, and limitations [3].
A key aspect of human-centered design involves drawing inspiration from human self-regulation and goal-setting processes [1]. Humans, as autonomous intelligent agents, have developed sophisticated mechanisms for navigating complex environments and achieving their objectives while minimizing harm. By studying these mechanisms, researchers aim to identify principles that can be translated into the design of safe AI agents [1]. This includes understanding how humans monitor their own actions, adapt to changing circumstances, and prioritize safety and well-being, so that AI agents can be designed to do the same [1].
The relationship between humans and AI systems is not solely transactional [3]. As AI agents become more integrated into our lives, they can generate the perception of deeper relationships with users, especially as AI becomes more personalized and agentic. This shift, from transactional interaction to ongoing sustained social engagement with AI, necessitates a new focus on socioaffective alignment: how an AI system behaves within the social and psychological ecosystem co-created with its user, where preferences and perceptions evolve through mutual influence [3]. This involves resolving key intrapersonal dilemmas [3].
In addition to incorporating human behavioral principles, ensuring human agency is a core element of safe AI design [10]. As AI systems become more capable, there is a risk that they could undermine or even usurp human control [10]. AI systems can reshape human intention, and humans lack the biological and psychological mechanisms that would protect them from a loss of agency [10]. Therefore, it is critical to design AI systems that preserve and enhance human agency, rather than diminish it [10]. This requires careful consideration of how AI systems interact with humans, how they influence decision-making, and how they can be designed to support human autonomy and control [10].
One approach to preserving human agency involves the development of human-AI copilot systems, where human experts can intervene in the training and operation of AI agents [14]. Human intervention is an effective way to inject human knowledge into the training loop of reinforcement learning, which can enable fast learning and ensure training safety [14]. These systems allow humans to provide guidance, correct errors, and ensure that the AI agent remains aligned with human values and goals [14].
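To illustrate the copilot idea, here is a minimal sketch of an interaction loop in which a human may override the agent's proposed action at each step, and overridden transitions are recorded so the policy can later learn from the corrections. The env, policy, and human_override interfaces are hypothetical placeholders, not the implementation from the cited paper.

```python
# Hypothetical human-AI copilot loop: the human can override any proposed
# action, and intervened transitions are flagged for later training use.
def copilot_episode(env, policy, human_override, buffer):
    obs = env.reset()
    done = False
    while not done:
        agent_action = policy.act(obs)
        # The human inspects the state and returns an action, or None to defer.
        human_action = human_override(obs, agent_action)
        action = human_action if human_action is not None else agent_action
        next_obs, reward, done = env.step(action)
        # Record whether the step was human-corrected so training can weight
        # these transitions differently (e.g., imitate them directly).
        buffer.append({
            "obs": obs,
            "action": action,
            "reward": reward,
            "human_intervened": human_action is not None,
        })
        obs = next_obs
    return buffer
```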
The design of AI agents that account for human beliefs about their intentions can also improve human-AI collaboration [18]. A limitation of most existing approaches is their assumption that human behavior remains static, regardless of the AI agent's actions [18]. In reality, humans may adjust their actions based on their beliefs about the AI's intentions, specifically, the subtasks they perceive the AI to be attempting to complete based on its behavior [18]. By incorporating a model of human beliefs, AI agents can better understand and respond to human needs, leading to more effective collaboration [18].
Risk Alignment in Agentic AI Systems
As AI agents become more autonomous, the concept of risk alignment becomes increasingly important [2]. Risk alignment involves ensuring that AI systems have appropriate attitudes toward risk, aligning with the risk preferences of their users and society more broadly [2]. AIs with reckless attitudes toward risk (either because they are calibrated to reckless human users or are poorly designed) may pose significant threats [2].
One key area of research focuses on defining and measuring risk attitudes in AI systems [2]. This involves identifying the factors that influence an agent's decision-making under uncertainty, such as its goals, its knowledge, and its perception of the environment [2]. Researchers are exploring how to calibrate AI systems to the risk attitudes of their users, taking into account individual differences in risk tolerance [2].
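As a toy illustration of what a "risk attitude" means operationally, the snippet below compares how a risk-neutral agent and a risk-averse agent (using an exponential utility function) rank a safe option against a gamble with a higher expected value. The utility function and numbers are illustrative assumptions, not taken from the cited work.

```python
import math

# Two options: a guaranteed payoff, and a gamble with a higher expected value.
safe_option  = [(1.0, 50.0)]                 # (probability, payoff)
risky_option = [(0.5, 120.0), (0.5, 0.0)]    # expected value 60

def expected_utility(option, utility):
    return sum(p * utility(x) for p, x in option)

risk_neutral = lambda x: x                          # values payoffs linearly
risk_averse  = lambda x: 1 - math.exp(-x / 40.0)    # concave: dislikes variance

for name, u in [("risk-neutral", risk_neutral), ("risk-averse", risk_averse)]:
    options = {"safe": safe_option, "risky": risky_option}
    best = max(options, key=lambda o: expected_utility(options[o], u))
    print(f"{name} agent prefers the {best} option")
# The risk-neutral agent picks the gamble (higher expected value); the
# risk-averse agent picks the guaranteed payoff despite its lower mean.
```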
Another important aspect of risk alignment is the development of guardrails and safety mechanisms to prevent AI agents from engaging in risky or harmful behaviors [2]. These guardrails can take various forms, including constraints on the agent's actions, monitoring systems to detect and correct undesirable behaviors, and mechanisms for human oversight and intervention [1, 8].
The ethical considerations involved in designing AI systems that make risky decisions on behalf of others are also paramount [2]. This includes questions of responsibility, accountability, and the potential for bias [2]. It is crucial to ensure that AI systems are designed in a way that is fair, transparent, and accountable, and that they do not exacerbate existing social inequalities [2].
Socioaffective Alignment and the Dynamics of Human-AI Relationships
The development of AI agents that can engage in sustained social interactions with humans necessitates a focus on socioaffective alignment [3]. This involves understanding how AI systems can behave within the social and psychological ecosystem co-created with their users [3].
Addressing these dynamics involves resolving key intrapersonal dilemmas, including balancing immediate versus long-term well-being, protecting autonomy, and managing AI companionship alongside the desire to preserve human social bonds [3]. By framing these challenges through a notion of basic psychological needs, we seek AI systems that support, rather than exploit, our fundamental nature as social and emotional beings [3].
One area of research focuses on the design of conversational agents (CAs) that can provide support and guidance to users, particularly in sensitive areas such as mental and sexual health [5]. However, the use of CAs in these contexts raises safety concerns, as adolescents are increasingly using CAs for interactive knowledge discovery on sensitive topics [5]. Unintended risks have been documented with adolescents' interactions with AI-based CAs, such as being exposed to inappropriate content, false information, and/or being given advice that is detrimental to their mental and physical well-being [5]. Therefore, it is critical to establish guardrails for the safe evolution of AI-based CAs for adolescents [5].
The perception of AI agents can also be influenced by the way they are presented to users [11]. For example, studies have shown that students' perceptions of conversational agents like Alexa can change after they program their own conversational agents in week-long AI education workshops [11]. Students felt Alexa was more intelligent and felt closer to it after the workshops [11]. These findings highlight the importance of careful consideration of personification, transparency, playfulness, and utility when designing CAs for learning contexts [11].
Technical Approaches to Safe AI Design
A variety of technical approaches are being explored to ensure the safety and reliability of AI agents. These approaches can be broadly categorized into several areas, including:
  • Value Alignment: Ensuring that AI systems are aligned with human values and goals is a central challenge in AI safety [4]. This involves specifying what humans want from AI systems and ensuring that AI systems act in accordance with these specifications [4]. This is a complex task because human values can be ambiguous, context-dependent, and may even vary among individuals and over time [4].
  • Robustness and Reliability: AI systems must be able to operate reliably in a variety of environments and under a range of conditions [8]. This requires developing AI systems that are robust to noise, uncertainty, and adversarial attacks [8].
  • Explainability and Interpretability: It is important to understand how AI systems make decisions [16]. This involves developing AI systems that are explainable and interpretable, so that users can understand the reasoning behind the system's actions [16].
  • Verification and Validation: Rigorous testing, verification, and validation are essential to ensure that AI systems meet safety requirements [8]. This includes developing formal methods for verifying the correctness of AI systems and testing them in a variety of simulated and real-world environments [8].
  • Control and Constraint: AI systems can be designed with built-in mechanisms to control their behavior and prevent them from causing harm [8]. This includes the use of constraints on the agent's actions, reward shaping, and other techniques to guide the agent's behavior (a small guardrail sketch follows this list) [8].
  • Monitoring and Auditing: Continuous monitoring and auditing of AI systems can help detect and correct unexpected or undesirable behaviors [8]. This involves the use of sensors, logs, and other data sources to track the agent's actions and identify potential problems [8].
  • Red Teaming and Adversarial Testing: Red teaming involves the use of adversarial tests to identify vulnerabilities in AI systems and to test the effectiveness of safety mechanisms [8]. This approach is often used to simulate attacks on AI systems and to assess their resilience [8].
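As a simple illustration of the "control and constraint" item above, the sketch below wraps an agent's action selection in a guardrail that blocks disallowed actions, substitutes a safe default, and logs the attempt for auditing. The interfaces are hypothetical placeholders, not a specific framework from the cited papers.

```python
# Hypothetical guardrail wrapper: proposed actions are checked against an
# explicit constraint before execution; violations fall back to a safe
# default and are logged so they can be audited later.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("guardrail")

class GuardedAgent:
    def __init__(self, policy, is_allowed, safe_action):
        self.policy = policy            # callable: observation -> proposed action
        self.is_allowed = is_allowed    # callable: (observation, action) -> bool
        self.safe_action = safe_action  # fallback action known to be harmless

    def act(self, obs):
        proposed = self.policy(obs)
        if self.is_allowed(obs, proposed):
            return proposed
        # Constraint violated: record the attempt and substitute the safe default.
        log.info("blocked action %r in state %r", proposed, obs)
        return self.safe_action

# Toy usage: an agent that proposes a destructive action gets overridden.
agent = GuardedAgent(
    policy=lambda obs: "delete_all",
    is_allowed=lambda obs, a: a in {"read", "summarize"},
    safe_action="no_op",
)
print(agent.act({"inbox": 3}))   # prints "no_op"
```

Such a wrapper naturally complements the monitoring and auditing item: the same log of blocked actions becomes the audit trail.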
One emerging paradigm for safe AI deployment is structured access [17]. Instead of openly disseminating AI systems, developers facilitate controlled, arm's length interactions with their AI systems [17]. The aim is to prevent dangerous AI capabilities from being widely accessible, whilst preserving access to AI capabilities that can be used safely [17].
Another approach involves focusing on non-agentic AI systems, such as Scientist AI [6]. This system is designed to explain the world from observations, as opposed to taking actions in it to imitate or please humans [6]. It comprises a world model that generates theories to explain data and a question-answering inference machine [6]. Both components operate with an explicit notion of uncertainty to mitigate the risks of overconfident predictions [6].
The Role of Multi-Agent Systems and Cooperation
Many real-world problems involve multiple AI agents interacting with each other and with human users [7]. Designing safe and effective multi-agent systems is therefore a critical area of research [7].
One challenge in multi-agent systems is ensuring that AI agents can cooperate and coordinate their actions to achieve common goals [13]. The tragedy of the commons illustrates a fundamental social dilemma where individual rational actions lead to collectively undesired outcomes, threatening the sustainability of shared resources [13]. AI agents can be leveraged to enhance cooperation in public goods games, moving beyond traditional regulatory approaches to using AI as facilitators of cooperation [13].
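To make the public-goods dynamic concrete, the sketch below simulates a one-shot public goods game and shows how adding an unconditionally contributing AI participant raises the group's total payoff. The payoff structure is the standard textbook formulation; the specific numbers and the "AI always contributes" rule are illustrative assumptions, not the mechanism studied in the cited paper.

```python
# One-shot public goods game: each player keeps their endowment minus their
# contribution, and the pooled contributions are multiplied and shared equally.
def payoffs(contributions, endowment=10.0, multiplier=1.6):
    pot = sum(contributions) * multiplier
    share = pot / len(contributions)
    return [endowment - c + share for c in contributions]

# Five players. Case 1: everyone free-rides (the individually rational choice,
# since each contributed unit returns only 1.6/5 = 0.32 to the contributor).
all_free_ride = payoffs([0, 0, 0, 0, 0])
print("all free-ride:", all_free_ride, "total:", sum(all_free_ride))

# Case 2: the fifth participant is an AI facilitator contributing fully.
with_ai = payoffs([0, 0, 0, 0, 10])
print("with AI facilitator:", with_ai, "total:", sum(with_ai))
```

With these numbers the group total rises from 50 to 56 and each human's payoff rises from 10 to 13.2, while the contributing AI itself earns only 3.2, which is exactly the dilemma the cited work seeks to resolve through richer facilitation mechanisms.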
Another challenge is ensuring that AI agents can coexist and evolve harmoniously in shared environments without creating chaos [7]. Existing multi-agent frameworks, such as multi-agent systems and game theory, are largely limited to predefined rules and static objective structures [7]. A shift toward the emergent, self-organizing, and context-aware nature of these systems is needed [7].
Ethical Considerations and Governance
The development and deployment of safe AI agents raise a number of ethical considerations [2]. These include questions of fairness, transparency, accountability, and the potential for bias [2].
One key ethical challenge is ensuring that AI systems are fair and do not discriminate against any group or individual [2]. This requires careful attention to the data used to train AI systems, as well as the algorithms and models used to make decisions [2].
Transparency is also essential for building trust in AI systems [16]. Users need to understand how AI systems work, how they make decisions, and what data they are using [16]. This requires developing AI systems that are explainable and interpretable, as well as providing users with clear and accessible information about the system's behavior [16].
Accountability is another critical ethical consideration [4]. When AI systems cause harm, it is important to establish who is responsible and how they can be held accountable [4]. This requires clear lines of responsibility, as well as mechanisms for redress and compensation [4].
The potential for bias in AI systems is a significant concern [2]. AI systems can inherit biases from the data they are trained on, or from the algorithms and models used to make decisions [2]. This can lead to unfair or discriminatory outcomes, particularly for marginalized groups [2]. Therefore, it is crucial to develop methods for detecting and mitigating bias in AI systems [2].
Governance is essential for ensuring the responsible development and deployment of AI agents [15]. This includes establishing standards, regulations, and best practices for the design, development, and use of AI systems [15]. It also includes creating mechanisms for oversight and accountability, such as independent review boards and regulatory agencies [15].
One approach to AI governance involves progressive decentralization [15]. Current approaches to AI governance often fall short in anticipating a future where AI agents manage critical tasks [15]. Cryptoeconomic incentives can help design decentralized governance systems that allow AI agents to autonomously interact and exchange value while ensuring human oversight via progressive decentralization [15].
Research Gaps and Future Directions
Despite significant progress in the design of safe AI agents, several research gaps and challenges remain. These include:
  • Formalizing and Operationalizing Values: Translating human values into formal specifications that can be used to guide the design of AI systems is a major challenge [4]. This requires developing methods for capturing and representing complex human values in a way that is both precise and comprehensive [4].
  • Developing Robust Safety Mechanisms: The development of robust safety mechanisms that can prevent AI agents from causing harm in a wide range of environments and under a variety of conditions is a critical area of research [8]. This requires developing techniques for detecting and correcting unexpected or undesirable behaviors, as well as methods for ensuring that AI systems are resilient to adversarial attacks [8].
  • Addressing the Challenges of Multi-Agent Systems: The design of safe and effective multi-agent systems presents a number of unique challenges [7]. This includes ensuring that AI agents can cooperate and coordinate their actions, as well as preventing them from engaging in harmful or undesirable behaviors [7].
  • Understanding and Mitigating Bias: Bias in AI systems is a significant concern [2]. Further research is needed to understand the sources of bias in AI systems, as well as to develop methods for detecting and mitigating bias [2].
  • Improving Explainability and Interpretability: The development of AI systems that are explainable and interpretable is essential for building trust and ensuring accountability [16]. More research is needed to develop techniques for explaining the reasoning behind an AI system's decisions in a way that is understandable to human users [16].
  • Advancing Socioaffective Alignment: As AI agents become more integrated into our lives, understanding how to ensure socioaffective alignment will be critical [3]. More work is needed to understand the dynamics of human-AI relationships and to design AI systems that support human well-being and autonomy [3].
  • Creating Incentives for Safe AI Development: A key but underexplored area, with few or no existing papers, concerns the incentives AI companies have to invest in safe AI development research [12]. Approaches like model organisms of misalignment, multi-agent safety, and safety by design may be slow to progress without funding or efforts from government, civil society, philanthropists, or academia [12].
Looking ahead, the future of safe AI design will likely involve a multi-faceted approach that combines technical innovation with ethical considerations and robust governance frameworks. This will require collaboration among researchers, developers, policymakers, and the public to ensure that AI systems are designed and deployed in a way that benefits society as a whole.
Future research should focus on developing more sophisticated methods for value alignment, robust safety mechanisms, and methods for ensuring the fairness, transparency, and accountability of AI systems [4, 8, 2]. It will also be essential to develop governance frameworks that can adapt to the rapid pace of AI innovation and that can effectively address the ethical challenges posed by these technologies [15].
==================================================
References
  1. Mark Muraven. Designing a Safe Autonomous Artificial Intelligence Agent based on Human Self-Regulation. arXiv:1701.01487v1 (2017). Available at: http://arxiv.org/abs/1701.01487v1
  2. Hayley Clatterbuck, Clinton Castro, Arvo Muñoz Morán. Risk Alignment in Agentic AI Systems. arXiv:2410.01927v1 (2024). Available at: http://arxiv.org/abs/2410.01927v1
  3. Hannah Rose Kirk, Iason Gabriel, Chris Summerfield, Bertie Vidgen, Scott A. Hale. Why human-AI relationships need socioaffective alignment. arXiv:2502.02528v1 (2025). Available at: http://arxiv.org/abs/2502.02528v1
  4. Deven R. Desai, Mark O. Riedl. Responsible AI Agents. arXiv:2502.18359v1 (2025). Available at: http://arxiv.org/abs/2502.18359v1
  5. Jinkyung Park, Vivek Singh, Pamela Wisniewski. Toward Safe Evolution of Artificial Intelligence (AI) based Conversational Agents to Support Adolescent Mental and Sexual Health Knowledge Discovery. arXiv:2404.03023v1 (2024). Available at: http://arxiv.org/abs/2404.03023v1
  6. Yoshua Bengio, Michael Cohen, Damiano Fornasiere, Joumana Ghosn, Pietro Greiner, Matt MacDermott, Sören Mindermann, Adam Oberman, Jesse Richardson, Oliver Richardson, Marc-Antoine Rondeau, Pierre-Luc St-Charles, David Williams-King. Superintelligent Agents Pose Catastrophic Risks: Can Scientist AI Offer a Safer Path?. arXiv:2502.15657v2 (2025). Available at: http://arxiv.org/abs/2502.15657v2
  7. Hepeng Li, Yuhong Liu, Jun Yan. Position: Emergent Machina Sapiens Urge Rethinking Multi-Agent Paradigms. arXiv:2502.04388v1 (2025). Available at: http://arxiv.org/abs/2502.04388v1
  8. Tomek Korbak, Joshua Clymer, Benjamin Hilton, Buck Shlegeris, Geoffrey Irving. A sketch of an AI control safety case. arXiv:2501.17315v1 (2025). Available at: http://arxiv.org/abs/2501.17315v1
  9. Zhe Su, Xuhui Zhou, Sanketh Rangreji, Anubha Kabra, Julia Mendelsohn, Faeze Brahman, Maarten Sap. AI-LieDar: Examine the Trade-off Between Utility and Truthfulness in LLM Agents. arXiv:2409.09013v1 (2024). Available at: http://arxiv.org/abs/2409.09013v1
  10. Catalin Mitelut, Ben Smith, Peter Vamplew. Intent-aligned AI systems deplete human agency: the need for agency foundations research in AI safety. arXiv:2305.19223v1 (2023). Available at: http://arxiv.org/abs/2305.19223v1
  11. Jessica Van Brummelen, Viktoriya Tabunshchyk, Tommy Heng. "Alexa, Can I Program You?": Student Perceptions of Conversational Artificial Intelligence Before and After Programming Alexa. arXiv:2102.01367v1 (2021). Available at: http://arxiv.org/abs/2102.01367v1
  12. Oscar Delaney, Oliver Guest, Zoe Williams. Mapping Technical Safety Research at AI Companies: A literature review and incentives analysis. arXiv:2409.07878v2 (2024). Available at: http://arxiv.org/abs/2409.07878v2
  13. Arend Hintze, Christoph Adami. Promoting Cooperation in the Public Goods Game using Artificial Intelligent Agents. arXiv:2412.05450v1 (2024). Available at: http://arxiv.org/abs/2412.05450v1
  14. Quanyi Li, Zhenghao Peng, Bolei Zhou. Efficient Learning of Safe Driving Policy via Human-AI Copilot Optimization. arXiv:2202.10341v1 (2022). Available at: http://arxiv.org/abs/2202.10341v1
  15. Tomer Jordi Chaffer. Governing the Agent-to-Agent Economy of Trust via Progressive Decentralization. arXiv:2501.16606v1 (2025). Available at: http://arxiv.org/abs/2501.16606v1
  16. Ajay Vishwanath, Einar Duenger Bøhn, Ole-Christoffer Granmo, Charl Maree, Christian Omlin. Towards Artificial Virtuous Agents: Games, Dilemmas and Machine Learning. arXiv:2208.14037v3 (2022). Available at: http://arxiv.org/abs/2208.14037v3
  17. Toby Shevlane. Structured access: an emerging paradigm for safe AI deployment. arXiv:2201.05159v2 (2022). Available at: http://arxiv.org/abs/2201.05159v2
  18. Guanghui Yu, Robert Kasumba, Chien-Ju Ho, William Yeoh. On the Utility of Accounting for Human Beliefs about AI Intention in Human-AI Collaboration. arXiv:2406.06051v2 (2024). Available at: http://arxiv.org/abs/2406.06051v2