If you need more than one keyword, modify the field and separate the keywords with an underscore (_).
The list of search keywords can be up to 50 characters long.
If you modify the keywords, press Enter within the field to confirm the new search key.
Tag: attacker
Bibliography items where it occurs: 64
- AI Ethics Issues in Real World: Evidence from AI Incident Database / 2206.07635 / ISBN:https://doi.org/10.48550/arXiv.2206.07635 / Published by ArXiv / on (web) Publishing site
- 2 Related Work
- Implementing Responsible AI: Tensions and Trade-Offs Between Ethics Aspects / 2304.08275 / ISBN:https://doi.org/10.48550/arXiv.2304.08275 / Published by ArXiv / on (web) Publishing site
- II. Underlying Aspects
- Ethical Considerations and Policy Implications for Large Language Models: Guiding Responsible Development and Deployment / 2308.02678 / ISBN:https://doi.org/10.48550/arXiv.2308.02678 / Published by ArXiv / on (web) Publishing site
- Perturbation
Generation-related
Bias and Discrimination of Training Data
Conclusion
- A Survey of Safety and Trustworthiness of Large Language Models through the Lens of Verification and Validation / 2305.11391 / ISBN:https://doi.org/10.48550/arXiv.2305.11391 / Published by ArXiv / on (web) Publishing site
- 3 Vulnerabilities, Attack, and Limitations
5 Falsification and Evaluation
6 Verification
8 Regulations and Ethical Use
- Getting pwn'd by AI: Penetration Testing with Large Language Models / 2308.00121 / ISBN:https://doi.org/10.48550/arXiv.2308.00121 / Published by ArXiv / on (web) Publishing site
- 6 Final ethical considerations
- The Promise and Peril of Artificial Intelligence -- Violet Teaming Offers a Balanced Path Forward / 2308.14253 / ISBN:https://doi.org/10.48550/arXiv.2308.14253 / Published by ArXiv / on (web) Publishing site
- 10 Supplemental & additional details
- Security Considerations in AI-Robotics: A Survey of Current Methods, Challenges, and Opportunities / 2310.08565 / ISBN:https://doi.org/10.48550/arXiv.2310.08565 / Published by ArXiv / on (web) Publishing site
- IV. Attack Surfaces
- Deepfakes, Phrenology, Surveillance, and More! A Taxonomy of AI Privacy Risks / 2310.07879 / ISBN:https://doi.org/10.48550/arXiv.2310.07879 / Published by ArXiv / on (web) Publishing site
- 4 Taxonomy of AI Privacy Risks
- How Trustworthy are Open-Source LLMs? An Assessment under Malicious Demonstrations Shows their Vulnerabilities / 2311.09447 / ISBN:https://doi.org/10.48550/arXiv.2311.09447 / Published by ArXiv / on (web) Publishing site
- 3 Methodology
- Practical Cybersecurity Ethics: Mapping CyBOK to Ethical Concerns / 2311.10165 / ISBN:https://doi.org/10.48550/arXiv.2311.10165 / Published by ArXiv / on (web) Publishing site
- 2 Background
- Generative AI and US Intellectual Property Law / 2311.16023 / ISBN:https://doi.org/10.48550/arXiv.2311.16023 / Published by ArXiv / on (web) Publishing site
- V. Potential harms and mitigation
- Survey on AI Ethics: A Socio-technical Perspective / 2311.17228 / ISBN:https://doi.org/10.48550/arXiv.2311.17228 / Published by ArXiv / on (web) Publishing site
- 2 Privacy and data protection
- Autonomous Threat Hunting: A Future Paradigm for AI-Driven Threat Intelligence / 2401.00286 / ISBN:https://doi.org/10.48550/arXiv.2401.00286 / Published by ArXiv / on (web) Publishing site
- 7. Evaluation metrics and performance benchmarks
9. Conclusion
- MULTI-CASE: A Transformer-based Ethics-aware Multimodal Investigative Intelligence Framework / 2401.01955 / ISBN:https://doi.org/10.48550/arXiv.2401.01955 / Published by ArXiv / on (web) Publishing site
- V. Evaluation
- Detecting Multimedia Generated by Large AI Models: A Survey / 2402.00045 / ISBN:https://doi.org/10.48550/arXiv.2402.00045 / Published by ArXiv / on (web) Publishing site
- 3 Detection
- Trust and ethical considerations in a multi-modal, explainable AI-driven chatbot tutoring system: The case of collaboratively solving Rubik's Cube / 2402.01760 / ISBN:https://doi.org/10.48550/arXiv.2402.01760 / Published by ArXiv / on (web) Publishing site
- 4. Discussion
- POLARIS: A framework to guide the development of Trustworthy AI systems / 2402.05340 / ISBN:https://doi.org/10.48550/arXiv.2402.05340 / Published by ArXiv / on (web) Publishing site
- 4 The POLARIS framework
- Towards an AI-Enhanced Cyber Threat Intelligence Processing Pipeline / 2403.03265 / ISBN:https://doi.org/10.48550/arXiv.2403.03265 / Published by ArXiv / on (web) Publishing site
- IV. Challenges and Considerations
- Responsible Artificial Intelligence: A Structured Literature Review / 2403.06910 / ISBN:https://doi.org/10.48550/arXiv.2403.06910 / Published by ArXiv / on (web) Publishing site
- 3. Analysis
- Review of Generative AI Methods in Cybersecurity / 2403.08701 / ISBN:https://doi.org/10.48550/arXiv.2403.08701 / Published by ArXiv / on (web) Publishing site
- 2 Attacking GenAI
3 Cyber Offense
4 Cyber Defence
- Trust in AI: Progress, Challenges, and Future Directions / 2403.14680 / ISBN:https://doi.org/10.48550/arXiv.2403.14680 / Published by ArXiv / on (web) Publishing site
- 3. Findings
- Frontier AI Ethics: Anticipating and Evaluating the Societal Impacts of Language Model Agents / 2404.06750 / ISBN:https://doi.org/10.48550/arXiv.2404.06750 / Published by ArXiv / on (web) Publishing site
- Polarised Responses
- AI Alignment: A Comprehensive Survey / 2310.19852 / ISBN:https://doi.org/10.48550/arXiv.2310.19852 / Published by ArXiv / on (web) Publishing site
- 5 Governance
- Large Language Model Supply Chain: A Research Agenda / 2404.12736 / ISBN:https://doi.org/10.48550/arXiv.2404.12736 / Published by ArXiv / on (web) Publishing site
- 3 LLM Infrastructure
4 LLM Lifecycle
- Trustworthy AI-Generative Content in Intelligent 6G Network: Adversarial, Privacy, and Fairness / 2405.05930 / ISBN:https://doi.org/10.48550/arXiv.2405.05930 / Published by ArXiv / on (web) Publishing site
- III. Adversarial of AIGC Models in 6G Network
IV. Privacy of AIGC in 6G Network
- The Wolf Within: Covert Injection of Malice into MLLM Societies via an MLLM Operative / 2402.14859 / ISBN:https://doi.org/10.48550/arXiv.2402.14859 / Published by ArXiv / on (web) Publishing site
- 3. Methodology
- A Comprehensive Overview of Large Language Models (LLMs) for Cyber Defences: Opportunities and Directions / 2405.14487 / ISBN:https://doi.org/10.48550/arXiv.2405.14487 / Published by ArXiv / on (web) Publishing site
- II. Threat Intelligence
III. Vulnerability Assessment
IV. Network Security
IX. Challenges and Open Problems
- Responsible AI for Earth Observation / 2405.20868 / ISBN:https://doi.org/10.48550/arXiv.2405.20868 / Published by ArXiv / on (web) Publishing site
- 3 Secure AI in EO: Focusing on Defense Mechanisms, Uncertainty Modeling and Explainability
- Transforming Computer Security and Public Trust Through the Exploration of Fine-Tuning Large Language Models / 2406.00628 / ISBN:https://doi.org/10.48550/arXiv.2406.00628 / Published by ArXiv / on (web) Publishing site
- 3 Experimental Design, Overview, and Discussion
- The Ethics of Interaction: Mitigating Security Threats in LLMs / 2401.12273 / ISBN:https://doi.org/10.48550/arXiv.2401.12273 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
- Current state of LLM Risks and AI Guardrails / 2406.12934 / ISBN:https://doi.org/10.48550/arXiv.2406.12934 / Published by ArXiv / on (web) Publishing site
- 3 Strategies in Securing Large Language Models
- SecGenAI: Enhancing Security of Cloud-based Generative AI Applications within Australian Critical Technologies of National Interest / 2407.01110 / ISBN:https://doi.org/10.48550/arXiv.2407.01110 / Published by ArXiv / on (web) Publishing site
- IV. SECGENAI FRAMEWORK REQUIREMENTS SPECIFICATIONS
REFERENCES
- Bridging the Global Divide in AI Regulation: A Proposal for a Contextual, Coherent, and Commensurable Framework / 2303.11196 / ISBN:https://doi.org/10.48550/arXiv.2303.11196 / Published by ArXiv / on (web) Publishing site
- II. Global Divide in AI Regulation: Horizontally. Context-Specific
- Prioritizing High-Consequence Biological Capabilities in Evaluations of Artificial Intelligence Models / 2407.13059 / ISBN:https://doi.org/10.48550/arXiv.2407.13059 / Published by ArXiv / on (web) Publishing site
- Introduction
- Honest Computing: Achieving demonstrable data lineage and provenance for driving data and process-sensitive policies / 2407.14390 / ISBN:https://doi.org/10.48550/arXiv.2407.14390 / Published by ArXiv / on (web) Publishing site
- 2. Threat Model for Honest Computing
3. Honest Computing reference specifications
- Deepfake Media Forensics: State of the Art and Challenges Ahead / 2408.00388 / ISBN:https://doi.org/10.48550/arXiv.2408.00388 / Published by ArXiv / on (web) Publishing site
- 5. Deepfakes Detection Method on Realistic Scenarios
- Neuro-Symbolic AI for Military Applications / 2408.09224 / ISBN:https://doi.org/10.48550/arXiv.2408.09224 / Published by ArXiv / on (web) Publishing site
- V. Challenges and Risks
- CIPHER: Cybersecurity Intelligent Penetration-testing Helper for Ethical Researcher / 2408.11650 / ISBN:https://doi.org/10.48550/arXiv.2408.11650 / Published by ArXiv / on (web) Publishing site
- 1. Introduction
2. Background and Related Works
3. Methodology
4. Experiment Results
- Is Generative AI the Next Tactical Cyber Weapon For Threat Actors? Unforeseen Implications of AI Generated Cyber Attacks / 2408.12806 / ISBN:https://doi.org/10.48550/arXiv.2408.12806 / Published by ArXiv / on (web) Publishing site
- I. Introduction
IV. Attack Methodology
V. Conclusion
- Data-Centric Foundation Models in Computational Healthcare: A Survey / 2401.02458 / ISBN:https://doi.org/10.48550/arXiv.2401.02458 / Published by ArXiv / on (web) Publishing site
- 7 Data Privacy
- Navigating LLM Ethics: Advancements, Challenges, and Future Directions / 2406.18841 / ISBN:https://doi.org/10.48550/arXiv.2406.18841 / Published by ArXiv / on (web) Publishing site
- IV. Findings and Resultant Themes
- Data Defenses Against Large Language Models / 2410.13138 / ISBN:https://doi.org/10.48550/arXiv.2410.13138 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Ethics of Resisting LLM Inference
3 Threat Model
4 LLM Adversarial Attacks as LLM Inference Data Defenses
5 Experiments
6 Discussion
7 Conclusion and Limitations
- Do LLMs Have Political Correctness? Analyzing Ethical Biases and Jailbreak Vulnerabilities in AI Systems / 2410.13334 / ISBN:https://doi.org/10.48550/arXiv.2410.13334 / Published by ArXiv / on (web) Publishing site
- 3 Methodology PCJAILBREAK
- Jailbreaking and Mitigation of Vulnerabilities in Large Language Models / 2410.15236 / ISBN:https://doi.org/10.48550/arXiv.2410.15236 / Published by ArXiv / on (web) Publishing site
- III. Jailbreak Attack Methods and Techniques
IV. Defense Mechanisms Against Jailbreak Attacks
VII. Conclusion
- Redefining Finance: The Influence of Artificial Intelligence (AI) and Machine Learning (ML) / 2410.15951 / ISBN:https://doi.org/10.48550/arXiv.2410.15951 / Published by ArXiv / on (web) Publishing site
- Future Scope
- Towards Automated Penetration Testing: Introducing LLM Benchmark, Analysis, and Improvements / 2410.17141 / ISBN:https://doi.org/10.48550/arXiv.2410.17141 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
- Ethical Leadership in the Age of AI Challenges, Opportunities and Framework for Ethical Leadership / 2410.18095 / ISBN:https://doi.org/10.48550/arXiv.2410.18095 / Published by ArXiv / on (web) Publishing site
- Ethical Challenges Presented by AI
- Navigating the Cultural Kaleidoscope: A Hitchhiker's Guide to Sensitivity in Large Language Models / 2410.12880 / ISBN:https://doi.org/10.48550/arXiv.2410.12880 / Published by ArXiv / on (web) Publishing site
- Appendices
- Privacy-Preserving Video Anomaly Detection: A Survey / 2411.14565 / ISBN:https://doi.org/10.48550/arXiv.2411.14565 / Published by ArXiv / on (web) Publishing site
- VII. Discussion
- AI-Augmented Ethical Hacking: A Practical Examination of Manual Exploitation and Privilege Escalation in Linux Environments / 2411.17539 / ISBN:https://doi.org/10.48550/arXiv.2411.17539 / Published by ArXiv / on (web) Publishing site
- 6 Discussion: Benefits, Risks and Limitations
- AI Ethics in Smart Homes: Progress, User Requirements and Challenges / 2412.09813 / ISBN:https://doi.org/10.48550/arXiv.2412.09813 / Published by ArXiv / on (web) Publishing site
- 5 AI Ethics from Technology's Perspective
6 Challenges
- Autonomous Vehicle Security: A Deep Dive into Threat Modeling / 2412.15348 / ISBN:https://doi.org/10.48550/arXiv.2412.15348 / Published by ArXiv / on (web) Publishing site
- II. Autonomous Vehicles
III. Autonomous Vehicle Cybersecurity Attacks
IV. Overview of Threat Modelling
V. STRIDE & DREAD Threat Model for Autonomous Vehicles Architecture
- Large Language Model Safety: A Holistic Survey / 2412.17686 / ISBN:https://doi.org/10.48550/arXiv.2412.17686 / Published by ArXiv / on (web) Publishing site
- 4 Robustness to Attack
5 Misuse
8 Interpretability for LLM Safety
- Self-Disclosure to AI: The Paradox of Trust and Vulnerability in Human-Machine Interactions / 2412.20564 / ISBN:https://doi.org/10.48550/arXiv.2412.20564 / Published by ArXiv / on (web) Publishing site
- References
- Towards Safe AI Clinicians: A Comprehensive Study on Large Language Model Jailbreaking in Healthcare / 2501.18632 / ISBN:https://doi.org/10.48550/arXiv. / Published by ArXiv / on (web) Publishing site
- Introduction
Jailbreak Evaluation Method
Model Guardrail Enhancement
- Safety at Scale: A Comprehensive Survey of Large Model Safety / 2502.05206 / ISBN:https://doi.org/10.48550/arXiv. / Published by ArXiv / on (web) Publishing site
- 2 Vision Foundation Model Safety
3 Large Language Model Safety
5 Vision-Language Model Safety
6 Diffusion Model Safety
8 Open Challenges
- Position: We Need An Adaptive Interpretation of Helpful, Honest, and Harmless Principles / 2502.06059 / ISBN:https://doi.org/10.48550/arXiv. / Published by ArXiv / on (web) Publishing site
- 3 Ambiguity and Conflicts in HHH
- Fairness in Multi-Agent AI: A Unified Framework for Ethical and Equitable Autonomous Systems / 2502.07254 / ISBN:https://doi.org/10.48550/arXiv. / Published by ArXiv / on (web) Publishing site
- References
- Multi-Agent Risks from Advanced AI / 2502.14143 / ISBN:https://doi.org/10.48550/arXiv.2502.14143 / Published by ArXiv / on (web) Publishing site
- 3 Risk Factors
- On the Trustworthiness of Generative Foundation Models: Guideline, Assessment, and Perspective / 2502.14296 / ISBN:https://doi.org/10.48550/arXiv.2502.14296 / Published by ArXiv / on (web) Publishing site
- 6 Benchmarking Large Language Models
7 Benchmarking Vision-Language Models
9 Trustworthiness in Downstream Applications
10 Further Discussion
References
- Transforming Cyber Defense: Harnessing Agentic and Frontier AI for Proactive, Ethical Threat Intelligence / 2503.00164 / ISBN:https://doi.org/10.48550/arXiv.2503.00164 / Published by ArXiv / on (web) Publishing site
- 3 The Evolving Threat Landscape
4 Agentic AI and Frontier AI in Cybersecurity
6 Threat Intelligence Feeds and Sources in the Era of Frontier AI
- Jailbreaking Generative AI: Empowering Novices to Conduct Phishing Attacks / 2503.01395 / ISBN:https://doi.org/10.48550/arXiv.2503.01395 / Published by ArXiv / on (web) Publishing site
- I. Introduction
- Policy Frameworks for Transparent Chain-of-Thought Reasoning in Large Language Models / 2503.14521 / ISBN:https://doi.org/10.48550/arXiv.2503.14521 / Published by ArXiv / on (web) Publishing site
- 4 Arguments against Transparent CoT
- Who Owns the Output? Bridging Law and Technology in LLMs Attribution / 2504.01032 / ISBN:https://doi.org/10.48550/arXiv.2504.01032 / Published by ArXiv / on (web) Publishing site
- 3 From Legal Frameworks to Technological Solutions