RobertoLofaro.com - Knowledge Portal - human-generated content
Change, with and without technology



AI Ethics Primer - search within the bibliography - version 0.4 of 2023-12-13 - tag selected: phishing



Tag: phishing

Bibliography items where this tag occurs: 27
Ethical Considerations and Policy Implications for Large Language Models: Guiding Responsible Development and Deployment / 2308.02678 / DOI: https://doi.org/10.48550/arXiv.2308.02678 / Published by ArXiv / on (web) Publishing site
Generation-related
References


A Survey of Safety and Trustworthiness of Large Language Models through the Lens of Verification and Validation / 2305.11391 / DOI: https://doi.org/10.48550/arXiv.2305.11391 / Published by ArXiv / on (web) Publishing site
3 Vulnerabilities, Attack, and Limitations


Getting pwn'd by AI: Penetration Testing with Large Language Models / 2308.00121 / DOI: https://doi.org/10.48550/arXiv.2308.00121 / Published by ArXiv / on (web) Publishing site
1 Introduction
3 LLM-based penetration testing


The AI Revolution: Opportunities and Challenges for the Finance Sector / 2308.16538 / DOI: https://doi.org/10.48550/arXiv.2308.16538 / Published by ArXiv / on (web) Publishing site
3 Benefits of AI use in the finance sector


Generative AI and US Intellectual Property Law / 2311.16023 / DOI: https://doi.org/10.48550/arXiv.2311.16023 / Published by ArXiv / on (web) Publishing site
V. Potential harms and mitigation


Deepfakes, Misinformation, and Disinformation in the Era of Frontier AI, Generative AI, and Large AI Models / 2311.17394 / DOI: https://doi.org/10.48550/arXiv.2311.17394 / Published by ArXiv / on (web) Publishing site
III. The rise of large AI models


Autonomous Threat Hunting: A Future Paradigm for AI-Driven Threat Intelligence / 2401.00286 / DOI: https://doi.org/10.48550/arXiv.2401.00286 / Published by ArXiv / on (web) Publishing site
References


Mapping the Ethics of Generative AI: A Comprehensive Scoping Review / 2402.08323 / DOI: https://doi.org/10.48550/arXiv.2402.08323 / Published by ArXiv / on (web) Publishing site
3 Results
References


The METRIC-framework for assessing data quality for trustworthy AI in medicine: a systematic review / 2402.13635 / DOI: https://doi.org/10.48550/arXiv.2402.13635 / Published by ArXiv / on (web) Publishing site
References


Towards an AI-Enhanced Cyber Threat Intelligence Processing Pipeline / 2403.03265 / DOI: https://doi.org/10.48550/arXiv.2403.03265 / Published by ArXiv / on (web) Publishing site
III. The AI-Enhanced CTI Processing Pipeline


Review of Generative AI Methods in Cybersecurity / 2403.08701 / DOI: https://doi.org/10.48550/arXiv.2403.08701 / Published by ArXiv / on (web) Publishing site
Abstract
1 Introduction
2 Attacking GenAI
3 Cyber Offense
References


Generative Ghosts: Anticipating Benefits and Risks of AI Afterlives / 2402.01662 / DOI: https://doi.org/10.48550/arXiv.2402.01662 / Published by ArXiv / on (web) Publishing site
Anticipating Benefits and Risks of Generative Ghosts


Detecting AI Generated Text Based on NLP and Machine Learning Approaches / 2404.10032 / DOI: https://doi.org/10.48550/arXiv.2404.10032 / Published by ArXiv / on (web) Publishing site
II. Literature Review


Large Language Model Supply Chain: A Research Agenda / 2404.12736 / DOI: https://doi.org/10.48550/arXiv.2404.12736 / Published by ArXiv / on (web) Publishing site
References


The Necessity of AI Audit Standards Boards / 2404.13060 / DOI: https://doi.org/10.48550/arXiv.2404.13060 / Published by ArXiv / on (web) Publishing site
2 Audit the process, not just the product


Trustworthy AI-Generative Content in Intelligent 6G Network: Adversarial, Privacy, and Fairness / 2405.05930 / DOI: https://doi.org/10.48550/arXiv.2405.05930 / Published by ArXiv / on (web) Publishing site
VI. Case Study


A Comprehensive Overview of Large Language Models (LLMs) for Cyber Defences: Opportunities and Directions / 2405.14487 / DOI: https://doi.org/10.48550/arXiv.2405.14487 / Published by ArXiv / on (web) Publishing site
IV. Network Security
VI. Awareness
VIII. Ethical LLMs
IX. Challenges and Open Problems
References


Transforming Computer Security and Public Trust Through the Exploration of Fine-Tuning Large Language Models / 2406.00628 / DOI: https://doi.org/10.48550/arXiv.2406.00628 / Published by ArXiv / on (web) Publishing site
Abstract
1 Introduction


The Ethics of Interaction: Mitigating Security Threats in LLMs / 2401.12273 / DOI: https://doi.org/10.48550/arXiv.2401.12273 / Published by ArXiv / on (web) Publishing site
3 Potential Misuse and Security Concerns


SecGenAI: Enhancing Security of Cloud-based Generative AI Applications within Australian Critical Technologies of National Interest / 2407.01110 / DOI: https://doi.org/10.48550/arXiv.2407.01110 / Published by ArXiv / on (web) Publishing site
II. UNDERSTANDING GENAI SECURITY


Bridging the Global Divide in AI Regulation: A Proposal for a Contextual, Coherent, and Commensurable Framework / 2303.11196 / DOI: https://doi.org/10.48550/arXiv.2303.11196 / Published by ArXiv / on (web) Publishing site
IV. Proposing an Alternative 3C Framework


Assurance of AI Systems From a Dependability Perspective / 2407.13948 / DOI: https://doi.org/10.48550/arXiv.2407.13948 / Published by ArXiv / on (web) Publishing site
5 Assurance and Alignment for AGI


Mapping the individual, social, and biospheric impacts of Foundation Models / 2407.17129 / DOI: https://doi.org/10.48550/arXiv.2407.17129 / Published by ArXiv / on (web) Publishing site
4 Mapping Individual, Social, and Biospheric Impacts of Foundation Models
References
A Appendix


Catalog of General Ethical Requirements for AI Certification / 2408.12289 / DOI: https://doi.org/10.48550/arXiv.2408.12289 / Published by ArXiv / on (web) Publishing site
10 Transparency and Explainability (T)
11 Truthfulness (TR)


Is Generative AI the Next Tactical Cyber Weapon For Threat Actors? Unforeseen Implications of AI Generated Cyber Attacks / 2408.12806 / DOI: https://doi.org/10.48550/arXiv.2408.12806 / Published by ArXiv / on (web) Publishing site
Abstract
I. Introduction
II. Related Work
III. Generative AI
IV. Attack Methodology
V. Conclusion
References


Trustworthy and Responsible AI for Human-Centric Autonomous Decision-Making Systems / 2408.15550 / DOI: https://doi.org/10.48550/arXiv.2408.15550 / Published by ArXiv / on (web) Publishing site
5 Trustworthy and Responsible AI in Human-centric Applications


DetoxBench: Benchmarking Large Language Models for Multitask Fraud & Abuse Detection / 2409.06072 / DOI: https://doi.org/10.48550/arXiv.2409.06072 / Published by ArXiv / on (web) Publishing site
3 Data Details
4 LLM Services (Infrastructure)
10 Appendix