If you need more than one keyword, modify the field and separate keywords with an underscore (_).
The list of search keywords can be up to 50 characters long.
If you modify the keywords, press Enter within the field to confirm the new search key.
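As a rough illustration of the keyword format described above, here is a minimal sketch of how such an underscore-separated search key could be parsed and length-checked. The function name parseSearchKey and the MAX_SEARCH_KEY_LENGTH constant are hypothetical; this is an assumed reading of the stated behavior, not the site's actual implementation.

```typescript
// Hypothetical helper mirroring the stated rules:
// keywords are separated by "_" and the whole key is limited to 50 characters.
const MAX_SEARCH_KEY_LENGTH = 50;

function parseSearchKey(rawInput: string): string[] {
  const trimmed = rawInput.trim();
  if (trimmed.length > MAX_SEARCH_KEY_LENGTH) {
    throw new Error(`Search key exceeds ${MAX_SEARCH_KEY_LENGTH} characters`);
  }
  // Split on underscores and drop empty fragments (e.g. trailing "_").
  return trimmed.split("_").filter((keyword) => keyword.length > 0);
}

// Example: confirming the field with Enter would submit a key such as
// "exacerbated_bias", which parses to ["exacerbated", "bias"].
```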
Tag: exacerbated
Bibliography items where it occurs: 70
- Implementing Responsible AI: Tensions and Trade-Offs Between Ethics Aspects / 2304.08275 / ISBN:https://doi.org/10.48550/arXiv.2304.08275 / Published by ArXiv / on (web) Publishing site
- III. Interactions between Aspects
- Regulating AI manipulation: Applying Insights from behavioral economics and psychology to enhance the practicality of the EU AI Act / 2308.02041 / ISBN:https://doi.org/10.48550/arXiv.2308.02041 / Published by ArXiv / on (web) Publishing site
- 2 Clarifying Terminologies of Article-5: Insights from Behavioral Economics and Psychology
- The Promise and Peril of Artificial Intelligence -- Violet Teaming Offers a Balanced Path Forward / 2308.14253 / ISBN:https://doi.org/10.48550/arXiv.2308.14253 / Published by ArXiv / on (web) Publishing site
- 4 Integrating red teaming, blue teaming, and ethics with violet teaming
- FUTURE-AI: Guiding Principles and Consensus Recommendations for Trustworthy Artificial Intelligence in Medical Imaging / 2109.09658 / ISBN:https://doi.org/10.48550/arXiv.2109.09658 / Published by ArXiv / on (web) Publishing site
- 2. Fairness - For Equitable AI in Medical Imaging
- Deepfakes, Phrenology, Surveillance, and More! A Taxonomy of AI Privacy Risks / 2310.07879 / ISBN:https://doi.org/10.48550/arXiv.2310.07879 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
3 Method
4 Taxonomy of AI Privacy Risks
5 Discussion
- The AI Assessment Scale (AIAS): A Framework for Ethical Integration of Generative AI in Educational Assessment / 2312.07086 / ISBN:https://doi.org/10.48550/arXiv.2312.07086 / Published by ArXiv / on (web) Publishing site
- The AI Assessment Scale
- Synthetic Data in AI: Challenges, Applications, and Ethical Implications / 2401.01629 / ISBN:https://doi.org/10.48550/arXiv.2401.01629 / Published by ArXiv / on (web) Publishing site
- 3. The usage of synthetic data
- Beyond principlism: Practical strategies for ethical AI use in research practices / 2401.15284 / ISBN:https://doi.org/10.48550/arXiv.2401.15284 / Published by ArXiv / on (web) Publishing site
- 3 Five specific goals and action-guiding strategies for ethical AI use in research practices
- Inadequacies of Large Language Model Benchmarks in the Era of Generative Artificial Intelligence / 2402.09880 / ISBN:https://doi.org/10.48550/arXiv.2402.09880 / Published by ArXiv / on (web) Publishing site
- VII. Discussions
- Evolving AI Collectives to Enhance Human Diversity and Enable Self-Regulation / 2402.12590 / ISBN:https://doi.org/10.48550/arXiv.2402.12590 / Published by ArXiv / on (web) Publishing site
- 5. Open Challenges for Free-Formed AI Collectives
- Trust in AI: Progress, Challenges, and Future Directions / 2403.14680 / ISBN:https://doi.org/10.48550/arXiv.2403.14680 / Published by ArXiv / on (web) Publishing site
- 3. Findings
- AI Ethics: A Bibliometric Analysis, Critical Issues, and Key Gaps / 2403.14681 / ISBN:https://doi.org/10.48550/arXiv.2403.14681 / Published by ArXiv / on (web) Publishing site
- AI Ethics Development Phases Based on Keyword Analysis
- Safeguarding Marketing Research: The Generation, Identification, and Mitigation of AI-Fabricated Disinformation / 2403.14706 / ISBN:https://doi.org/10.48550/arXiv.2403.14706 / Published by ArXiv / on (web) Publishing site
- Introduction
- The Pursuit of Fairness in Artificial Intelligence Models: A Survey / 2403.17333 / ISBN:https://doi.org/10.48550/arXiv.2403.17333 / Published by ArXiv / on (web) Publishing site
- 4 Practical cases of unfairness in real-world setting
- Generative Ghosts: Anticipating Benefits and Risks of AI Afterlives / 2402.01662 / ISBN:https://doi.org/10.48550/arXiv.2402.01662 / Published by ArXiv / on (web) Publishing site
- 5 Discussion
- A scoping review of using Large Language Models (LLMs) to investigate Electronic Health Records (EHRs) / 2405.03066 / ISBN:https://doi.org/10.48550/arXiv.2405.03066 / Published by ArXiv / on (web) Publishing site
- 2 Materials
- Integrating Emotional and Linguistic Models for Ethical Compliance in Large Language Models / 2405.07076 / ISBN:https://doi.org/10.48550/arXiv.2405.07076 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
- An Empirical Design Justice Approach to Identifying Ethical Considerations in the Intersection of Large Language Models and Social Robotics / 2406.06400 / ISBN:https://doi.org/10.48550/arXiv.2406.06400 / Published by ArXiv / on (web) Publishing site
- Abstract
- Artificial intelligence, rationalization, and the limits of control in the public sector: the case of tax policy optimization / 2407.05336 / ISBN:https://doi.org/10.48550/arXiv.2407.05336 / Published by ArXiv / on (web) Publishing site
- 6. Conclusion
- Auditing of AI: Legal, Ethical and Technical Approaches / 2407.06235 / ISBN:https://doi.org/10.48550/arXiv.2407.06235 / Published by ArXiv / on (web) Publishing site
- 3 The need to audit AI systems – a confluence of top-down and bottom-up pressures
- Bridging the Global Divide in AI Regulation: A Proposal for a Contextual, Coherent, and Commensurable Framework / 2303.11196 / ISBN:https://doi.org/10.48550/arXiv.2303.11196 / Published by ArXiv / on (web) Publishing site
- III. Striking a Balance Between the Two Approaches
IV. Proposing an Alternative 3C Framework
- Thorns and Algorithms: Navigating Generative AI Challenges Inspired by Giraffes and Acacias / 2407.11360 / ISBN:https://doi.org/10.48550/arXiv.2407.11360 / Published by ArXiv / on (web) Publishing site
- 4 Generative AI and Humans: Risks and Mitigation
- Prioritizing High-Consequence Biological Capabilities in Evaluations of Artificial Intelligence Models / 2407.13059 / ISBN:https://doi.org/10.48550/arXiv.2407.13059 / Published by ArXiv / on (web) Publishing site
- Introduction
- Mapping the individual, social, and biospheric impacts of Foundation Models / 2407.17129 / ISBN:https://doi.org/10.48550/arXiv.2407.17129 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
4 Mapping Individual, Social, and Biospheric Impacts of Foundation Models
- Don't Kill the Baby: The Case for AI in Arbitration / 2408.11608 / ISBN:https://doi.org/10.48550/arXiv.2408.11608 / Published by ArXiv / on (web) Publishing site
- I. AI and the Federal Arbitration Act
- Digital Homunculi: Reimagining Democracy Research with Generative Agents / 2409.00826 / ISBN:https://doi.org/10.48550/arXiv.2409.00826 / Published by ArXiv / on (web) Publishing site
- 4. Risks and Caveats
- Navigating LLM Ethics: Advancements, Challenges, and Future Directions / 2406.18841 / ISBN:https://doi.org/10.48550/arXiv.2406.18841 / Published by ArXiv / on (web) Publishing site
- V. Discussion
- Improving governance outcomes through AI documentation: Bridging theory and practice / 2409.08960 / ISBN:https://doi.org/10.48550/arXiv.2409.08960 / Published by ArXiv / on (web) Publishing site
- 4 Results
- Clinnova Federated Learning Proof of Concept: Key Takeaways from a Cross-border Collaboration / 2410.02443 / ISBN:https://doi.org/10.48550/arXiv.2410.02443 / Published by ArXiv / on (web) Publishing site
- V. Proof of Concepts 2
- The Cat and Mouse Game: The Ongoing Arms Race Between Diffusion Models and Detection Methods / 2410.18866 / ISBN:https://doi.org/10.48550/arXiv.2410.18866 / Published by ArXiv / on (web) Publishing site
- II. Fundamentals of Diffusion Models and Detection Challenges
- Smoke Screens and Scapegoats: The Reality of General Data Protection Regulation Compliance -- Privacy and Ethics in the Case of Replika AI / 2411.04490 / ISBN:https://doi.org/10.48550/arXiv.2411.04490 / Published by ArXiv / on (web) Publishing site
- 1. Introduction
2. AI chatbots in privacy and ethics research
- A Comprehensive Review of Multimodal XR Applications, Risks, and Ethical Challenges in the Metaverse / 2411.04508 / ISBN:https://doi.org/10.48550/arXiv.2411.04508 / Published by ArXiv / on (web) Publishing site
- 1. Introduction
4. Potential Risks and Ethical Challenges of XR and the Metaverse
5. General Discussion
- "I Always Felt that Something Was Wrong.": Understanding Compliance Risks and Mitigation Strategies when Professionals Use Large Language Models / 2411.04576 / ISBN:https://doi.org/10.48550/arXiv.2411.04576 / Published by ArXiv / on (web) Publishing site
- 5 Discussion
- Navigating the Cultural Kaleidoscope: A Hitchhiker's Guide to Sensitivity in Large Language Models / 2410.12880 / ISBN:https://doi.org/10.48550/arXiv.2410.12880 / Published by ArXiv / on (web) Publishing site
- Appendices
- Nteasee: A mixed methods study of expert and general population perspectives on deploying AI for health in African countries / 2409.12197 / ISBN:https://doi.org/10.48550/arXiv.2409.12197 / Published by ArXiv / on (web) Publishing site
- 4 Results
- Persuasion with Large Language Models: a Survey / 2411.06837 / ISBN:https://doi.org/10.48550/arXiv.2411.06837 / Published by ArXiv / on (web) Publishing site
- 5 Ethical Considerations
- Generative AI in Multimodal User Interfaces: Trends, Challenges, and Cross-Platform Adaptability / 2411.10234 / ISBN:https://doi.org/10.48550/arXiv.2411.10234 / Published by ArXiv / on (web) Publishing site
- IV. Current App Frameworks and AI Integration
- Bias in Large Language Models: Origin, Evaluation, and Mitigation / 2411.10915 / ISBN:https://doi.org/10.48550/arXiv.2411.10915 / Published by ArXiv / on (web) Publishing site
- 6. Ethical Concerns and Legal Challenges
- Privacy-Preserving Video Anomaly Detection: A Survey / 2411.14565 / ISBN:https://doi.org/10.48550/arXiv.2411.14565 / Published by ArXiv / on (web) Publishing site
- VII. Discussion
- Examining Multimodal Gender and Content Bias in ChatGPT-4o / 2411.19140 / ISBN:https://doi.org/10.48550/arXiv.2411.19140 / Published by ArXiv / on (web) Publishing site
- 1. Introduction
2. Related Works
- Human-centred test and evaluation of military AI / 2412.01978 / ISBN:https://doi.org/10.48550/arXiv.2412.01978 / Published by ArXiv / on (web) Publishing site
- Full Summary
- Ethical Challenges and Evolving Strategies in the Integration of Artificial Intelligence into Clinical Practice / 2412.03576 / ISBN:https://doi.org/10.48550/arXiv.2412.03576 / Published by ArXiv / on (web) Publishing site
- Introduction and Motivation
Core Ethical Challenges
- Employee Well-being in the Age of AI: Perceptions, Concerns, Behaviors, and Outcomes / 2412.04796 / ISBN:https://doi.org/10.48550/arXiv.2412.04796 / Published by ArXiv / on (web) Publishing site
- I. Key Concerns on AI and Employee Well-Being
- Digital Democracy in the Age of Artificial Intelligence / 2412.07791 / ISBN:https://doi.org/10.48550/arXiv.2412.07791 / Published by ArXiv / on (web) Publishing site
- 1. Introduction
- Large Language Model Safety: A Holistic Survey / 2412.17686 / ISBN:https://doi.org/10.48550/arXiv.2412.17686 / Published by ArXiv / on (web) Publishing site
- 7 Agent Safety
- Uncovering Bias in Foundation Models: Impact, Testing, Harm, and Mitigation / 2501.10453 / ISBN:https://doi.org/10.48550/arXiv.2501.10453 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
- Harnessing the Potential of Large Language Models in Modern Marketing Management: Applications, Future Directions, and Strategic Recommendations / 2501.10685 / ISBN:https://doi.org/10.48550/arXiv.2501.10685 / Published by ArXiv / on (web) Publishing site
- 1. Introduction
- Responsible Generative AI Use by Product Managers: Recoupling Ethical Principles and Practices / 2501.16531 / ISBN:https://doi.org/10.48550/arXiv.2501.16531 / Published by ArXiv / on (web) Publishing site
- 5. Findings
6. Discussion
- A Conceptual Exploration of Generative AI-Induced Cognitive Dissonance and its Emergence in University-Level Academic Writing / 2502.05698 / ISBN:https://doi.org/10.48550/arXiv.2502.05698 / Published by ArXiv / on (web) Publishing site
- GenAI-induced Cognitive Dissonance: A Hypothetical Construct
- Fairness in Multi-Agent AI: A Unified Framework for Ethical and Equitable Autonomous Systems / 2502.07254 / ISBN:https://doi.org/10.48550/arXiv.2502.07254 / Published by ArXiv / on (web) Publishing site
- Paper
- Multi-Agent Risks from Advanced AI / 2502.14143 / ISBN:https://doi.org/10.48550/arXiv.2502.14143 / Published by ArXiv / on (web) Publishing site
- 2 Failure Modes
3 Risk Factors
4 Implications
- On the Trustworthiness of Generative Foundation Models: Guideline, Assessment, and Perspective / 2502.14296 / ISBN:https://doi.org/10.48550/arXiv.2502.14296 / Published by ArXiv / on (web) Publishing site
- 5 Benchmarking Text-to-Image Models
- Surgical Scene Understanding in the Era of Foundation AI Models: A Comprehensive Review / 2502.14886 / ISBN:https://doi.org/10.48550/arXiv.2502.14886 / Published by ArXiv / on (web) Publishing site
- II. Background and Challenges
- Fair Foundation Models for Medical Image Analysis: Challenges and Perspectives / 2502.16841 / ISBN:https://doi.org/10.48550/arXiv.2502.16841 / Published by ArXiv / on (web) Publishing site
- 5 Policymakers
- Can AI Model the Complexities of Human Moral Decision-Making? A Qualitative Study of Kidney Allocation Decisions / 2503.00940 / ISBN:https://doi.org/10.48550/arXiv.2503.00940 / Published by ArXiv / on (web) Publishing site
- 4 Main Findings and Themes
- Vision Language Models in Medicine / 2503.01863 / ISBN:https://doi.org/10.48550/arXiv.2503.01863 / Published by ArXiv / on (web) Publishing site
- Abstract
- Twenty Years of Personality Computing: Threats, Challenges and Future Directions / 2503.02082 / ISBN:https://doi.org/10.48550/arXiv.2503.02082 / Published by ArXiv / on (web) Publishing site
- 4 Discussion and Conclusion
- Medical Hallucinations in Foundation Models and Their Impact on Healthcare / 2503.05777 / ISBN:https://doi.org/10.48550/arXiv.2503.05777 / Published by ArXiv / on (web) Publishing site
- 2 LLM Hallucinations in Medicine
4 Detection and Evaluation of Medical Hallucinations
- Mapping out AI Functions in Intelligent Disaster (Mis)Management and AI-Caused Disasters / 2502.16644 / ISBN:https://doi.org/10.48550/arXiv.2502.16644 / Published by ArXiv / on (web) Publishing site
- 1. Introduction
3. Intelligent Disaster Mismanagement (IDMM)
- Ethical Implications of AI in Data Collection: Balancing Innovation with Privacy / 2503.14539 / ISBN:https://doi.org/10.48550/arXiv.2503.14539 / Published by ArXiv / on (web) Publishing site
- Introduction
- HH4AI: A methodological Framework for AI Human Rights impact assessment under the EUAI ACT / 2503.18994 / ISBN:https://doi.org/10.48550/arXiv.2503.18994 / Published by ArXiv / on (web) Publishing site
- Abstract
- Who is Responsible When AI Fails? Mapping Causes, Entities, and Consequences of AI Privacy and Ethical Incidents / 2504.01029 / ISBN:https://doi.org/10.48550/arXiv.2504.01029 / Published by ArXiv / on (web) Publishing site
- 2. Related Work
4. Taxonomy of AI Privacy and Ethical Incidents
5. Discussion
- Systematic Literature Review: Explainable AI Definitions and Challenges in Education / 2504.02910 / ISBN:https://doi.org/10.48550/arXiv.2504.02910 / Published by ArXiv / on (web) Publishing site
- Technical (CH3)
Discussion
- AI-Driven Healthcare: A Review on Ensuring Fairness and Mitigating Bias / 2407.19655 / ISBN:https://doi.org/10.48550/arXiv.2407.19655 / Published by ArXiv / on (web) Publishing site
- 2 Fairness Concerns in Healthcare
- A Comprehensive Survey on Integrating Large Language Models with Knowledge-Based Methods / 2501.13947 / ISBN:https://doi.org/10.48550/arXiv.2501.13947 / Published by ArXiv / on (web) Publishing site
- 5. Integrating LLMs with knowledge bases
- Confirmation Bias in Generative AI Chatbots: Mechanisms, Risks, Mitigation Strategies, and Future Research Directions / 2504.09343 / ISBN:https://doi.org/10.48550/arXiv.2504.09343 / Published by ArXiv / on (web) Publishing site
- 3. Confirmation Bias in Generative AI Chatbots
4. Mechanisms of Confirmation Bias in Chatbot Architectures
5. Risks and Ethical Implications
- Towards responsible AI for education: Hybrid human-AI to confront the Elephant in the room / 2504.16148 / ISBN:https://doi.org/10.48550/arXiv.2504.16148 / Published by ArXiv / on (web) Publishing site
- 5. Conclusion
- AI Awareness / 2504.20084 / ISBN:https://doi.org/10.48550/arXiv.2504.20084 / Published by ArXiv / on (web) Publishing site
- 5 Risks and Challenges of AI Awareness
- Analysing Safety Risks in LLMs Fine-Tuned with Pseudo-Malicious Cyber Security Data / 2505.09974 / ISBN:https://doi.org/10.48550/arXiv.2505.09974 / Published by ArXiv / on (web) Publishing site
- 5 Further Discussions
- AI Literacy for Legal AI Systems: A practical approach / 2505.18006 / ISBN:https://doi.org/10.48550/arXiv.2505.18006 / Published by ArXiv / on (web) Publishing site
- 5. Legal AI Systems Risk Assessment