If you need more than one keyword, modify the field and separate the keywords with an underscore (_).
The list of search keywords can be up to 50 characters long.
If you modify the keywords, press Enter within the field to confirm the new search key.
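As a minimal sketch of these field rules (a hypothetical helper for illustration, not part of the site's code), multiple keywords are joined with underscores and the combined key is checked against the 50-character limit:

    # Hypothetical helper: build a search key as described above
    # (underscore-separated keywords, at most 50 characters in total).
    def build_search_key(keywords):
        key = "_".join(k.strip() for k in keywords if k.strip())
        if len(key) > 50:
            raise ValueError(f"search key exceeds 50 characters: {key!r}")
        return key

    # Example: build_search_key(["exposing", "privacy"]) returns "exposing_privacy"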
Tag: exposing
Bibliography items in which the tag occurs: 72
- The AI Index 2022 Annual Report / 2205.03468 / ISBN:https://doi.org/10.48550/arXiv.2205.03468 / Published by ArXiv / on (web) Publishing site
- Appendix
- Ethical Considerations and Policy Implications for Large Language Models: Guiding Responsible Development and Deployment / 2308.02678 / ISBN:https://doi.org/10.48550/arXiv.2308.02678 / Published by ArXiv / on (web) Publishing site
- Conclusion
- Pathway to Future Symbiotic Creativity / 2209.02388 / ISBN:https://doi.org/10.48550/arXiv.2209.02388 / Published by ArXiv / on (web) Publishing site
- Part 5 - 1 Authorship and Ownership of AI-generated Works of Art
- EALM: Introducing Multidimensional Ethical Alignment in Conversational Information Retrieval / 2310.00970 / ISBN:https://doi.org/10.48550/arXiv.2310.00970 / Published by ArXiv / on (web) Publishing site
- 3 Dataset Construction
Appendix
- Deepfakes, Phrenology, Surveillance, and More! A Taxonomy of AI Privacy Risks / 2310.07879 / ISBN:https://doi.org/10.48550/arXiv.2310.07879 / Published by ArXiv / on (web) Publishing site
- 4 Taxonomy of AI Privacy Risks
- The AI Incident Database as an Educational Tool to Raise Awareness of AI Harms: A Classroom Exploration of Efficacy, Limitations, & Future Improvements / 2310.06269 / ISBN:https://doi.org/10.48550/arXiv.2310.06269 / Published by ArXiv / on (web) Publishing site
- 5 Conclusion
- A Survey of Large Language Models for Healthcare: from Data, Technology, and Applications to Accountability and Ethics / 2310.05694 / ISBN:https://doi.org/10.48550/arXiv.2310.05694 / Published by ArXiv / on (web) Publishing site
- 5. Improving fairness, accountability, transparency, and ethics
- Who to Trust, How and Why: Untangling AI Ethics Principles, Trustworthiness and Trust / 2309.10318 / ISBN:https://doi.org/10.48550/arXiv.2309.10318 / Published by ArXiv / on (web) Publishing site
- Trust and AI Ethics Principles
- First, Do No Harm: Algorithms, AI, and Digital Product Liability. Managing Algorithmic Harms Through Liability Law and Market Incentives / 2311.10861 / ISBN:https://doi.org/10.48550/arXiv.2311.10861 / Published by ArXiv / on (web) Publishing site
- Preface
- Survey on AI Ethics: A Socio-technical Perspective / 2311.17228 / ISBN:https://doi.org/10.48550/arXiv.2311.17228 / Published by ArXiv / on (web) Publishing site
- 2 Privacy and data protection
- Control Risk for Potential Misuse of Artificial Intelligence in Science / 2312.06632 / ISBN:https://doi.org/10.48550/arXiv.2312.06632 / Published by ArXiv / on (web) Publishing site
- 2 Risks of Misuse for Artificial Intelligence in Science
- Beyond principlism: Practical strategies for ethical AI use in research practices / 2401.15284 / ISBN:https://doi.org/10.48550/arXiv.2401.15284 / Published by ArXiv / on (web) Publishing site
- 3 Five specific goals and action-guiding strategies for ethical AI use in research practices
- Detecting Multimedia Generated by Large AI Models: A Survey / 2402.00045 / ISBN:https://doi.org/10.48550/arXiv.2402.00045 / Published by ArXiv / on (web) Publishing site
- 4 Tools and Evaluation Metrics
- Generative Artificial Intelligence in Higher Education: Evidence from an Analysis of Institutional Policies and Guidelines / 2402.01659 / ISBN:https://doi.org/10.48550/arXiv.2402.01659 / Published by ArXiv / on (web) Publishing site
- 4. Findings
- Inadequacies of Large Language Model Benchmarks in the Era of Generative Artificial Intelligence / 2402.09880 / ISBN:https://doi.org/10.48550/arXiv.2402.09880 / Published by ArXiv / on (web) Publishing site
- VI. Human Dynamics
VIII. Conclusion
- What if LLMs Have Different World Views: Simulating Alien Civilizations with LLM-based Agents / 2402.13184 / ISBN:https://doi.org/10.48550/arXiv.2402.13184 / Published by ArXiv / on (web) Publishing site
- A Appendix
- Power and Play: Investigating License to Critique in Teams' AI Ethics Discussions / 2403.19049 / ISBN:https://doi.org/10.48550/arXiv.2403.19049 / Published by ArXiv / on (web) Publishing site
- 4 RQ2: How Do AI Ethics Discussions Unfold while Playing a Game Oriented toward Speculative Critique?
- AI Procurement Checklists: Revisiting Implementation in the Age of AI Governance / 2404.14660 / ISBN:https://doi.org/10.48550/arXiv.2404.14660 / Published by ArXiv / on (web) Publishing site
- 1 Technical assessments require an AI expert to complete — and we don’t have enough experts
- Not a Swiss Army Knife: Academics' Perceptions of Trade-Offs Around Generative Artificial Intelligence Use / 2405.00995 / ISBN:https://doi.org/10.48550/arXiv.2405.00995 / Published by ArXiv / on (web) Publishing site
- 3 Method
- The Wolf Within: Covert Injection of Malice into MLLM Societies via an MLLM Operative / 2402.14859 / ISBN:https://doi.org/10.48550/arXiv.2402.14859 / Published by ArXiv / on (web) Publishing site
- 1. Introduction
- A Comprehensive Overview of Large Language Models (LLMs) for Cyber Defences: Opportunities and Directions / 2405.14487 / ISBN:https://doi.org/10.48550/arXiv.2405.14487 / Published by ArXiv / on (web) Publishing site
- IV. Network Security
- The Future of Child Development in the AI Era. Cross-Disciplinary Perspectives Between AI and Child Development Experts / 2405.19275 / ISBN:https://doi.org/10.48550/arXiv.2405.19275 / Published by ArXiv / on (web) Publishing site
- 3. Discussion
- Responsible AI for Earth Observation / 2405.20868 / ISBN:https://doi.org/10.48550/arXiv.2405.20868 / Published by ArXiv / on (web) Publishing site
- 4 Geo-Privacy and Privacy-preserving Measures
- Transforming Computer Security and Public Trust Through the Exploration of Fine-Tuning Large Language Models / 2406.00628 / ISBN:https://doi.org/10.48550/arXiv.2406.00628 / Published by ArXiv / on (web) Publishing site
- 5 Discussion and further research
- The Ethics of Interaction: Mitigating Security Threats in LLMs / 2401.12273 / ISBN:https://doi.org/10.48550/arXiv.2401.12273 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
3 Potential Misuse and Security Concerns
- Federated Learning driven Large Language Models for Swarm Intelligence: A Survey / 2406.09831 / ISBN:https://doi.org/10.48550/arXiv.2406.09831 / Published by ArXiv / on (web) Publishing site
- III. Federated LLMs for Swarm Intelligence
- A Blueprint for Auditing Generative AI / 2407.05338 / ISBN:https://doi.org/10.48550/arXiv.2407.05338 / Published by ArXiv / on (web) Publishing site
- 5 Model audits
- Auditing of AI: Legal, Ethical and Technical Approaches / 2407.06235 / ISBN:https://doi.org/10.48550/arXiv.2407.06235 / Published by ArXiv / on (web) Publishing site
- 2 The evolution of auditing as a governance mechanism
- Honest Computing: Achieving demonstrable data lineage and provenance for driving data and process-sensitive policies / 2407.14390 / ISBN:https://doi.org/10.48550/arXiv.2407.14390 / Published by ArXiv / on (web) Publishing site
- 2. Threat Model for Honest Computing
4. Discussion
- CIPHER: Cybersecurity Intelligent Penetration-testing Helper for Ethical Researcher / 2408.11650 / ISBN:https://doi.org/10.48550/arXiv.2408.11650 / Published by ArXiv / on (web) Publishing site
- 4. Experiment Results
- Promises and challenges of generative artificial intelligence for human learning / 2408.12143 / ISBN:https://doi.org/10.48550/arXiv.2408.12143 / Published by ArXiv / on (web) Publishing site
- Tables
- Catalog of General Ethical Requirements for AI Certification / 2408.12289 / ISBN:https://doi.org/10.48550/arXiv.2408.12289 / Published by ArXiv / on (web) Publishing site
- 8 Safety and Robustness (SR)
- Has Multimodal Learning Delivered Universal Intelligence in Healthcare? A Comprehensive Survey / 2408.12880 / ISBN:https://doi.org/10.48550/arXiv.2408.12880 / Published by ArXiv / on (web) Publishing site
- 6 Discussions of Current Studies
- Large language models as linguistic simulators and cognitive models in human research / 2402.04470 / ISBN:https://doi.org/10.48550/arXiv.2402.04470 / Published by ArXiv / on (web) Publishing site
- Concluding remarks
- Beyond Algorithmic Fairness: A Guide to Develop and Deploy Ethical AI-Enabled Decision-Support Tools / 2409.11489 / ISBN:https://doi.org/10.48550/arXiv.2409.11489 / Published by ArXiv / on (web) Publishing site
- Appendix A Technical and Contextual Details for Collaborative Decentralized Cold Supply Chains
- XTRUST: On the Multilingual Trustworthiness of Large Language Models / 2409.15762 / ISBN:https://doi.org/10.48550/arXiv.2409.15762 / Published by ArXiv / on (web) Publishing site
- 4 Experiments
- Ethical and Scalable Automation: A Governance and Compliance Framework for Business Applications / 2409.16872 / ISBN:https://doi.org/10.48550/arXiv.2409.16872 / Published by ArXiv / on (web) Publishing site
- 5. Analysis and Discussion
- Responsible AI in Open Ecosystems: Reconciling Innovation with Risk Assessment and Disclosure / 2409.19104 / ISBN:https://doi.org/10.48550/arXiv.2409.19104 / Published by ArXiv / on (web) Publishing site
- 3 Methods
- From human-centered to social-centered artificial intelligence: Assessing ChatGPT's impact through disruptive events / 2306.00227 / ISBN:https://doi.org/10.48550/arXiv.2306.00227 / Published by ArXiv / on (web) Publishing site
- The emerging social impacts of ChatGPT
- A Comprehensive Review of Multimodal XR Applications, Risks, and Ethical Challenges in the Metaverse / 2411.04508 / ISBN:https://doi.org/10.48550/arXiv.2411.04508 / Published by ArXiv / on (web) Publishing site
- 3. XR Applications: Expanding Multimodal Interactions Across Domains
- GPT versus Humans: Uncovering Ethical Concerns in Conversational Generative AI-empowered Multi-Robot Systems / 2411.14009 / ISBN:https://doi.org/10.48550/arXiv.2411.14009 / Published by ArXiv / on (web) Publishing site
- 3 Method
- Privacy-Preserving Video Anomaly Detection: A Survey / 2411.14565 / ISBN:https://doi.org/10.48550/arXiv.2411.14565 / Published by ArXiv / on (web) Publishing site
- V. Edge-Cloud Intelligence Empowered P2VAD
VII. Discussion
- Political-LLM: Large Language Models in Political Science / 2412.06864 / ISBN:https://doi.org/10.48550/arXiv.2412.06864 / Published by ArXiv / on (web) Publishing site
- 6 Future Directions & Challenges
- Towards Friendly AI: A Comprehensive Review and New Perspectives on Human-AI Alignment / 2412.15114 / ISBN:https://doi.org/10.48550/arXiv.2412.15114 / Published by ArXiv / on (web) Publishing site
- III. Theoretical Perspectives
- Autonomous Vehicle Security: A Deep Dive into Threat Modeling / 2412.15348 / ISBN:https://doi.org/10.48550/arXiv.2412.15348 / Published by ArXiv / on (web) Publishing site
- III. Autonomous Vehicle Cybersecurity Attacks
- Ethics and Technical Aspects of Generative AI Models in Digital Content Creation / 2412.16389 / ISBN:https://doi.org/10.48550/arXiv.2412.16389 / Published by ArXiv / on (web) Publishing site
- 4 Results
- Self-Disclosure to AI: The Paradox of Trust and Vulnerability in Human-Machine Interactions / 2412.20564 / ISBN:https://doi.org/10.48550/arXiv.2412.20564 / Published by ArXiv / on (web) Publishing site
- 3 The Psychology of Confiding and Self-Disclosure
- Generative AI and LLMs in Industry: A text-mining Analysis and Critical Evaluation of Guidelines and Policy Statements Across Fourteen Industrial Sectors / 2501.00957 / ISBN:https://doi.org/10.48550/arXiv.2501.00957 / Published by ArXiv / on (web) Publishing site
- V. Discussion and Synthesis
- Human services organizations and the responsible integration of AI: Considering ethics and contextualizing risk(s) / 2501.11705 / ISBN:https://doi.org/10.48550/arXiv.2501.11705 / Published by ArXiv / on (web) Publishing site
- Ethical Issues in Context
- Examining the Expanding Role of Synthetic Data Throughout the AI Development Pipeline / 2501.18493 / ISBN:https://doi.org/10.48550/arXiv.2501.18493 / Published by ArXiv / on (web) Publishing site
- 2 Related Work
- FairT2I: Mitigating Social Bias in Text-to-Image Generation via Large Language Model-Assisted Detection and Attribute Rebalancing / 2502.03826 / ISBN:https://doi.org/10.48550/arXiv.2502.03826 / Published by ArXiv / on (web) Publishing site
- 5. Experimental Protocol
- Safety at Scale: A Comprehensive Survey of Large Model Safety / 2502.05206 / ISBN:https://doi.org/10.48550/arXiv.2502.05206 / Published by ArXiv / on (web) Publishing site
- 2 Vision Foundation Model Safety
3 Large Language Model Safety
5 Vision-Language Model Safety
8 Open Challenges
- Relational Norms for Human-AI Cooperation / 2502.12102 / ISBN:https://doi.org/10.48550/arXiv.2502.12102 / Published by ArXiv / on (web) Publishing site
- Section 3: Considerations and Future Directions for AI Governance and Design
- On the Trustworthiness of Generative Foundation Models: Guideline, Assessment, and Perspective / 2502.14296 / ISBN:https://doi.org/10.48550/arXiv.2502.14296 / Published by ArXiv / on (web) Publishing site
- 5 Benchmarking Text-to-Image Models
10 Further Discussion
- Comprehensive Analysis of Transparency and Accessibility of ChatGPT, DeepSeek, And other SoTA Large Language Models / 2502.18505 / ISBN:https://doi.org/10.48550/arXiv.2502.18505 / Published by ArXiv / on (web) Publishing site
- 3. Results
4. Discussion
- Evaluating Large Language Models on the Spanish Medical Intern Resident (MIR) Examination 2024/2025: A Comparative Analysis of Clinical Reasoning and Knowledge Application / 2503.00025 / ISBN:https://doi.org/10.48550/arXiv.2503.00025 / Published by ArXiv / on (web) Publishing site
- 5. Conclusion
- Detecting Dataset Bias in Medical AI: A Generalized and Modality-Agnostic Auditing Framework / 2503.09969 / ISBN:https://doi.org/10.48550/arXiv.2503.09969 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
- Synthetic Data for Robust AI Model Development in Regulated Enterprises / 2503.12353 / ISBN:https://doi.org/10.48550/arXiv.2503.12353 / Published by ArXiv / on (web) Publishing site
- Case Studies in Regulated Industries
- Policy Frameworks for Transparent Chain-of-Thought Reasoning in Large Language Models / 2503.14521 / ISBN:https://doi.org/10.48550/arXiv.2503.14521 / Published by ArXiv / on (web) Publishing site
- 1 Motivation
3 Arguments pro Transparent CoT
- A Peek Behind the Curtain: Using Step-Around Prompt Engineering to Identify Bias and Misinformation in GenAI Models / 2503.15205 / ISBN:https://doi.org/10.48550/arXiv.2503.15205 / Published by ArXiv / on (web) Publishing site
- Abstract
Literature Review
Step-Around Prompting: A Research Tool and Potential Threat
- Three Kinds of AI Ethics / 2503.18842 / ISBN:https://doi.org/10.48550/arXiv.2503.18842 / Published by ArXiv / on (web) Publishing site
- 5. Discussion
- Systematic Literature Review: Explainable AI Definitions and Challenges in Education / 2504.02910 / ISBN:https://doi.org/10.48550/arXiv.2504.02910 / Published by ArXiv / on (web) Publishing site
- Discussion
- Towards interactive evaluations for interaction harms in human-AI systems / 2405.10632 / ISBN:https://doi.org/10.48550/arXiv.2405.10632 / Published by ArXiv / on (web) Publishing site
- 5 Open challenges and ways forward for interactive evaluations
- Designing AI-Enabled Countermeasures to Cognitive Warfare / 2504.11486 / ISBN:https://doi.org/10.48550/arXiv.2504.11486 / Published by ArXiv / on (web) Publishing site
- 3.0 AI-Enabled Cognitive Warfare
4.0 Functional Analysis
- A Design Framework for operationalizing Trustworthy Artificial Intelligence in Healthcare: Requirements, Tradeoffs and Challenges for its Clinical Adoption / 2504.19179 / ISBN:https://doi.org/10.48550/arXiv.2504.19179 / Published by ArXiv / on (web) Publishing site
- 5. Tradeoffs between TAI principles and requirements
- From Texts to Shields: Convergence of Large Language Models and Cybersecurity / 2505.00841 / ISBN:https://doi.org/10.48550/arXiv.2505.00841 / Published by ArXiv / on (web) Publishing site
- 4 Socio-Technical Aspects of LLM and Security
- Securing the Future of IVR: AI-Driven Innovation with Agile Security, Data Regulation, and Ethical AI Integration / 2505.01514 / ISBN:https://doi.org/10.48550/arXiv.2505.01514 / Published by ArXiv / on (web) Publishing site
- III. Widget-Based Platforms: Improved Access, New Risks
- Analysing Safety Risks in LLMs Fine-Tuned with Pseudo-Malicious Cyber Security Data / 2505.09974 / ISBN:https://doi.org/10.48550/arXiv.2505.09974 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
3 Methodology
- Aligning Trustworthy AI with Democracy: A Dual Taxonomy of Opportunities and Risks / 2505.13565 / ISBN:https://doi.org/10.48550/arXiv.2505.13565 / Published by ArXiv / on (web) Publishing site
- 4 Risk taxonomy: risks posed by AI to democracy
- Just as Humans Need Vaccines, So Do Models: Model Immunization to Combat Falsehoods / 2505.17870 / ISBN:https://doi.org/10.48550/arXiv.2505.17870 / Published by ArXiv / on (web) Publishing site
- 2 Conceptual Framework: Model Immunization via Quarantined Falsehoods
Appendix
- HADA: Human-AI Agent Decision Alignment Architecture / 2506.04253 / ISBN:https://doi.org/10.48550/arXiv.2506.04253 / Published by ArXiv / on (web) Publishing site
- 2 Related Work: Emerging LLM Software Agents
3 Design & Development