If you need more than one keyword, modify the list and separate the keywords with an underscore (_).
The list of search keywords can be up to 50 characters long.
If you modify the keywords, press Enter within the field to confirm the new search key.
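A minimal sketch of how such a keyword string could be checked before submitting it, assuming only the two rules stated above (underscore-separated keywords, at most 50 characters in total); the function name and example keywords are illustrative, not part of the site:

MAX_KEYWORD_LIST_LENGTH = 50  # stated limit for the whole underscore-separated list

def is_valid_keyword_list(keywords: str) -> bool:
    # Reject empty strings and lists longer than the stated 50-character limit.
    if not keywords or len(keywords) > MAX_KEYWORD_LIST_LENGTH:
        return False
    # Every underscore-separated part must be a non-empty keyword.
    return all(part.strip() for part in keywords.split("_"))

print(is_valid_keyword_list("confirm_ethics"))  # True: two keywords, well under 50 characters
print(is_valid_keyword_list("k" * 51))          # False: exceeds the 50-character limit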
Tag: confirm
Bibliography items where it occurs: 203
- The AI Index 2022 Annual Report / 2205.03468 / ISBN:https://doi.org/10.48550/arXiv.2205.03468 / Published by ArXiv / on (web) Publishing site
- Chapter 2 Technical Performance
- AI Ethics Issues in Real World: Evidence from AI Incident Database / 2206.07635 / ISBN:https://doi.org/10.48550/arXiv.2206.07635 / Published by ArXiv / on (web) Publishing site
- 4 Results
- ESR: Ethics and Society Review of Artificial Intelligence Research / 2106.11521 / ISBN:https://doi.org/10.48550/arXiv.2106.11521 / Published by ArXiv / on (web) Publishing site
- 4 Deployment and Evaluation
- A Survey of Safety and Trustworthiness of Large Language Models through the Lens of Verification and Validation / 2305.11391 / ISBN:https://doi.org/10.48550/arXiv.2305.11391 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
- Artificial Intelligence across Europe: A Study on Awareness, Attitude and Trust / 2308.09979 / ISBN:https://doi.org/10.48550/arXiv.2308.09979 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
3 Discussion
- Targeted Data Augmentation for bias mitigation / 2308.11386 / ISBN:https://doi.org/10.48550/arXiv.2308.11386 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
4 Experiments
5 Conclusions
- Collect, Measure, Repeat: Reliability Factors for Responsible AI Data Collection / 2308.12885 / ISBN:https://doi.org/10.48550/arXiv.2308.12885 / Published by ArXiv / on (web) Publishing site
- 2 Related Work on Data Excellence
5 Results
- Rethinking Machine Ethics -- Can LLMs Perform Moral Reasoning through the Lens of Moral Theories? / 2308.15399 / ISBN:https://doi.org/10.48550/arXiv.2308.15399 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
- The AI Revolution: Opportunities and Challenges for the Finance Sector / 2308.16538 / ISBN:https://doi.org/10.48550/arXiv.2308.16538 / Published by ArXiv / on (web) Publishing site
- 2 Key AI technology in financial services
- Pathway to Future Symbiotic Creativity / 2209.02388 / ISBN:https://doi.org/10.48550/arXiv.2209.02388 / Published by ArXiv / on (web) Publishing site
- Part 4 NFTs and the Future Art Economy
- The Cambridge Law Corpus: A Corpus for Legal AI Research / 2309.12269 / ISBN:https://doi.org/10.48550/arXiv.2309.12269 / Published by ArXiv / on (web) Publishing site
- 3 Legal and Ethical Considerations
D Case Outcome Annotation Instructions
Cambridge Law Corpus: Datasheet
- Security Considerations in AI-Robotics: A Survey of Current Methods, Challenges, and Opportunities / 2310.08565 / ISBN:https://doi.org/10.48550/arXiv.2310.08565 / Published by ArXiv / on (web) Publishing site
- VI. Human-Robot Interaction (HRI) Security Studies
- Ethics of Artificial Intelligence and Robotics in the Architecture, Engineering, and Construction Industry / 2310.05414 / ISBN:https://doi.org/10.48550/arXiv.2310.05414 / Published by ArXiv / on (web) Publishing site
- 5. Ethical Issues of AI and Robotics in AEC Industry
- Compromise in Multilateral Negotiations and the Global Regulation of Artificial Intelligence / 2309.17158 / ISBN:https://doi.org/10.48550/arXiv.2309.17158 / Published by ArXiv / on (web) Publishing site
- 5. Text negotiations as normative testing
- Towards A Unified Utilitarian Ethics Framework for Healthcare Artificial Intelligence / 2309.14617 / ISBN:https://doi.org/10.48550/arXiv.2309.14617 / Published by ArXiv / on (web) Publishing site
- Introduction
Results and Discussion
- Autonomous Vehicles an overview on system, cyber security, risks, issues, and a way forward / 2309.14213 / ISBN:https://doi.org/10.48550/arXiv.2309.14213 / Published by ArXiv / on (web) Publishing site
- 2. Autonomous vehicles
- AI & Blockchain as sustainable teaching and learning tools to cope with the 4IR / 2305.01088 / ISBN:https://doi.org/10.48550/arXiv.2305.01088 / Published by ArXiv / on (web) Publishing site
- 4. Blockchain-based credentialing and certification
6. Blockchain-based decentralized learning networks
- Toward an Ethics of AI Belief / 2304.14577 / ISBN:https://doi.org/10.48550/arXiv.2304.14577 / Published by ArXiv / on (web) Publishing site
- 3. Proposed Novel Topics in an Ethics of AI Belief
- FUTURE-AI: International consensus guideline for trustworthy and deployable artificial intelligence in healthcare / 2309.12325 / ISBN:https://doi.org/10.48550/arXiv.2309.12325 / Published by ArXiv / on (web) Publishing site
- FUTURE-AI GUIDELINE
- Language Agents for Detecting Implicit Stereotypes in Text-to-Image Models at Scale / 2310.11778 / ISBN:https://doi.org/10.48550/arXiv.2310.11778 / Published by ArXiv / on (web) Publishing site
- 3 Agent Benchmark
5 Related Work
- Specific versus General Principles for Constitutional AI / 2310.13798 / ISBN:https://doi.org/10.48550/arXiv.2310.13798 / Published by ArXiv / on (web) Publishing site
- E Response Diversity and the Size of the Generating Model
- The Self 2.0: How AI-Enhanced Self-Clones Transform Self-Perception and Improve Presentation Skills / 2310.15112 / ISBN:https://doi.org/10.48550/arXiv.2310.15112 / Published by ArXiv / on (web) Publishing site
- 3 Method
5 Discussion
- Moral Responsibility for AI Systems / 2310.18040 / ISBN:https://doi.org/10.48550/arXiv.2310.18040 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
6 Degree of Responsibility
- Human participants in AI research: Ethics and transparency in practice / 2311.01254 / ISBN:https://doi.org/10.48550/arXiv.2311.01254 / Published by ArXiv / on (web) Publishing site
- Abstract
Appendix A Evaluating Current Practices for Human-Participants Research
- Educating for AI Cybersecurity Work and Research: Ethics, Systems Thinking, and Communication Requirements / 2311.04326 / ISBN:https://doi.org/10.48550/arXiv.2311.04326 / Published by ArXiv / on (web) Publishing site
- Research questions
- Fairness And Bias in Artificial Intelligence: A Brief Survey of Sources, Impacts, And Mitigation Strategies / 2304.07683 / ISBN:https://doi.org/10.48550/arXiv.2304.07683 / Published by ArXiv / on (web) Publishing site
- II. Sources of bias in AI
- She had Cobalt Blue Eyes: Prompt Testing to Create Aligned and Sustainable Language Models / 2310.18333 / ISBN:https://doi.org/10.48550/arXiv.2310.18333 / Published by ArXiv / on (web) Publishing site
- 2 Related Works
- Prudent Silence or Foolish Babble? Examining Large Language Models' Responses to the Unknown / 2311.09731 / ISBN:https://doi.org/10.48550/arXiv.2311.09731 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
3 Experiments
- Towards Auditing Large Language Models: Improving Text-based Stereotype Detection / 2311.14126 / ISBN:https://doi.org/10.48550/arXiv.2311.14126 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
5 Conclusion and Future Work
- RAISE -- Radiology AI Safety, an End-to-end lifecycle approach / 2311.14570 / ISBN:https://doi.org/10.48550/arXiv.2311.14570 / Published by ArXiv / on (web) Publishing site
- 2. Pre-Deployment phase
3. Production deployment monitoring phase
- Deepfakes, Misinformation, and Disinformation in the Era of Frontier AI, Generative AI, and Large AI Models / 2311.17394 / ISBN:https://doi.org/10.48550/arXiv.2311.17394 / Published by ArXiv / on (web) Publishing site
- V. Technical defense mechanisms
- From Lab to Field: Real-World Evaluation of an AI-Driven Smart Video Solution to Enhance Community Safety / 2312.02078 / ISBN:https://doi.org/10.48550/arXiv.2312.02078 / Published by ArXiv / on (web) Publishing site
- System Evaluation and Results
- RE-centric Recommendations for the Development of Trustworthy(er) Autonomous Systems / 2306.01774 / ISBN:https://doi.org/10.48550/arXiv.2306.01774 / Published by ArXiv / on (web) Publishing site
- 3 Methodology
4 Results & analysis
- Control Risk for Potential Misuse of Artificial Intelligence in Science / 2312.06632 / ISBN:https://doi.org/10.48550/arXiv.2312.06632 / Published by ArXiv / on (web) Publishing site
- 3 Control the Risks of AI Models in Science
- Disentangling Perceptions of Offensiveness: Cultural and Moral Correlates / 2312.06861 / ISBN:https://doi.org/10.48550/arXiv.2312.06861 / Published by ArXiv / on (web) Publishing site
- Study 1: Geo-cultural Differences in Offensiveness
Study 2: Moral Foundations of Offensiveness
Moral Factors
- The AI Assessment Scale (AIAS): A Framework for Ethical Integration of Generative AI in Educational Assessment / 2312.07086 / ISBN:https://doi.org/10.48550/arXiv.2312.07086 / Published by ArXiv / on (web) Publishing site
- Literature
- Learning Human-like Representations to Enable Learning Human Values / 2312.14106 / ISBN:https://doi.org/10.48550/arXiv.2312.14106 / Published by ArXiv / on (web) Publishing site
- 3 Problem Formulation
6 Discussion
- Improving Task Instructions for Data Annotators: How Clear Rules and Higher Pay Increase Performance in Data Annotation in the AI Economy / 2312.14565 / ISBN:https://doi.org/10.48550/arXiv.2312.14565 / Published by ArXiv / on (web) Publishing site
- IV. Results
V. Discussion
Appendix
- Culturally-Attuned Moral Machines: Implicit Learning of Human Value Systems by AI through Inverse Reinforcement Learning / 2312.17479 / ISBN:https://doi.org/10.48550/arXiv.2312.17479 / Published by ArXiv / on (web) Publishing site
- Results
Supplementary Material
- Autonomous Threat Hunting: A Future Paradigm for AI-Driven Threat Intelligence / 2401.00286 / ISBN:https://doi.org/10.48550/arXiv.2401.00286 / Published by ArXiv / on (web) Publishing site
- 3. Autonomous threat hunting: conceptual framework
- MULTI-CASE: A Transformer-based Ethics-aware Multimodal Investigative Intelligence Framework / 2401.01955 / ISBN:https://doi.org/10.48550/arXiv.2401.01955 / Published by ArXiv / on (web) Publishing site
- Abstract
IV. System design
V. Evaluation
VI. Discussion and future work
VII. Conclusion
- AI Ethics Principles in Practice: Perspectives of Designers and Developers / 2112.07467 / ISBN:https://doi.org/10.48550/arXiv.2112.07467 / Published by ArXiv / on (web) Publishing site
- IV. Results
V. Discussion and suggestions
- Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making / 2401.08691 / ISBN:https://doi.org/10.48550/arXiv.2401.08691 / Published by ArXiv / on (web) Publishing site
- II Mitigating bias - 5 Fairness mitigation
9 Towards fairness through time
- A Scoping Study of Evaluation Practices for Responsible AI Tools: Steps Towards Effectiveness Evaluations / 2401.17486 / ISBN:https://doi.org/10.48550/arXiv.2401.17486 / Published by ArXiv / on (web) Publishing site
- 2 Related work
- Generative Artificial Intelligence in Higher Education: Evidence from an Analysis of Institutional Policies and Guidelines / 2402.01659 / ISBN:https://doi.org/10.48550/arXiv.2402.01659 / Published by ArXiv / on (web) Publishing site
- 3. Research study
- Trust and ethical considerations in a multi-modal, explainable AI-driven chatbot tutoring system: The case of collaboratively solving Rubik's Cube / 2402.01760 / ISBN:https://doi.org/10.48550/arXiv.2402.01760 / Published by ArXiv / on (web) Publishing site
- C. ROSE: Tool and Data ResOurces to Explore the Instability of SEntiment Analysis Systems
- POLARIS: A framework to guide the development of Trustworthy AI systems / 2402.05340 / ISBN:https://doi.org/10.48550/arXiv.2402.05340 / Published by ArXiv / on (web) Publishing site
- 3 State of the practice
6 Limitations
7 Conclusion - Face Recognition: to Deploy or not to Deploy? A Framework for Assessing the Proportional Use of Face Recognition Systems in Real-World Scenarios / 2402.05731 / ISBN:https://doi.org/10.48550/arXiv.2402.05731 / Published by ArXiv / on (web) Publishing site
- 2. Background
6. Compliance with International Regulations
- I Think, Therefore I am: Benchmarking Awareness of Large Language Models Using AwareBench / 2401.17882 / ISBN:https://doi.org/10.48550/arXiv.2401.17882 / Published by ArXiv / on (web) Publishing site
- B Experimental Settings & Results
- What if LLMs Have Different World Views: Simulating Alien Civilizations with LLM-based Agents / 2402.13184 / ISBN:https://doi.org/10.48550/arXiv.2402.13184 / Published by ArXiv / on (web) Publishing site
- A Appendix
- FATE in MMLA: A Student-Centred Exploration of Fairness, Accountability, Transparency, and Ethics in Multimodal Learning Analytics / 2402.19071 / ISBN:https://doi.org/10.48550/arXiv.2402.19071 / Published by ArXiv / on (web) Publishing site
- 4. Results
- Guidelines for Integrating Value Sensitive Design in Responsible AI Toolkits / 2403.00145 / ISBN:https://doi.org/10.48550/arXiv.2403.00145 / Published by ArXiv / on (web) Publishing site
- 3 Methodology
- Towards an AI-Enhanced Cyber Threat Intelligence Processing Pipeline / 2403.03265 / ISBN:https://doi.org/10.48550/arXiv.2403.03265 / Published by ArXiv / on (web) Publishing site
- III. The AI-Enhanced CTI Processing Pipeline
IV. Challenges and Considerations
- Review of Generative AI Methods in Cybersecurity / 2403.08701 / ISBN:https://doi.org/10.48550/arXiv.2403.08701 / Published by ArXiv / on (web) Publishing site
- 3 Cyber Offense
4 Cyber Defence
5 Implications of Generative AI in Social, Legal, and Ethical Domains
- Evaluation Ethics of LLMs in Legal Domain / 2403.11152 / ISBN:https://doi.org/10.48550/arXiv.2403.11152 / Published by ArXiv / on (web) Publishing site
- 3 Method
- The Journey to Trustworthy AI- Part 1 Pursuit of Pragmatic Frameworks / 2403.15457 / ISBN:https://doi.org/10.48550/arXiv.2403.15457 / Published by ArXiv / on (web) Publishing site
- 6 Bias and Fairness
- Exploring the Nexus of Large Language Models and Legal Systems: A Short Survey / 2404.00990 / ISBN:https://doi.org/10.48550/arXiv.2404.00990 / Published by ArXiv / on (web) Publishing site
- 2 Applications of Large Language Models in Legal Tasks
- Designing for Human-Agent Alignment: Understanding what humans want from their agents / 2404.04289 / ISBN:https://doi.org/10.1145/3613905.3650948 / Published by ArXiv / on (web) Publishing site
- 4 Findings
- AI Alignment: A Comprehensive Survey / 2310.19852 / ISBN:https://doi.org/10.48550/arXiv.2310.19852 / Published by ArXiv / on (web) Publishing site
- 3 Learning under Distribution Shift
4 Assurance
- Generative Ghosts: Anticipating Benefits and Risks of AI Afterlives / 2402.01662 / ISBN:https://doi.org/10.48550/arXiv.2402.01662 / Published by ArXiv / on (web) Publishing site
- Abstract
- Taxonomy to Regulation: A (Geo)Political Taxonomy for AI Risks and Regulatory Measures in the EU AI Act / 2404.11476 / ISBN:https://doi.org/10.48550/arXiv.2404.11476 / Published by ArXiv / on (web) Publishing site
- 2 EU Public Policy Analysis
4 European Union Artificial Intelligence Act
- Just Like Me: The Role of Opinions and Personal Experiences in The Perception of Explanations in Subjective Decision-Making / 2404.12558 / ISBN:https://doi.org/10.48550/arXiv.2404.12558 / Published by ArXiv / on (web) Publishing site
- Abstract
4 Discussion and Implications
6 Conclusion & Future Work
- Modeling Emotions and Ethics with Large Language Models / 2404.13071 / ISBN:https://doi.org/10.48550/arXiv.2404.13071 / Published by ArXiv / on (web) Publishing site
- 3 Case Study #1: Linguistic Features of Emotion
- Guiding the Way: A Comprehensive Examination of AI Guidelines in Global Media / 2405.04706 / ISBN:https://doi.org/10.48550/arXiv.2405.04706 / Published by ArXiv / on (web) Publishing site
- 4 Results
- Redefining Qualitative Analysis in the AI Era: Utilizing ChatGPT for Efficient Thematic Analysis / 2309.10771 / ISBN:https://doi.org/10.48550/arXiv.2309.10771 / on (web) Publishing site
- 5 Analyses of the Design Process
8 Limitations and Future Work
- XXAI: Towards eXplicitly eXplainable Artificial Intelligence / 2401.03093 / ISBN:https://doi.org/10.48550/arXiv.2401.03093 / Published by ArXiv / on (web) Publishing site
- 4. Discussion of the problems of symbolic AI and ways to overcome them
5. Conclusions and prospects
- A scoping review of using Large Language Models (LLMs) to investigate Electronic Health Records (EHRs) / 2405.03066 / ISBN:https://doi.org/10.48550/arXiv.2405.03066 / Published by ArXiv / on (web) Publishing site
- 2 Materials
- Using ChatGPT for Thematic Analysis / 2405.08828 / ISBN:https://doi.org/10.48550/arXiv.2405.08828 / Published by ArXiv / on (web) Publishing site
- 3 Pilot-testing: UN Policy Documents Thematic Analysis Supported by GPT
6 OpenAI Updates on Policies and Model Capabilities: Implications for Thematic Analysis
- A Comprehensive Overview of Large Language Models (LLMs) for Cyber Defences: Opportunities and Directions / 2405.14487 / ISBN:https://doi.org/10.48550/arXiv.2405.14487 / Published by ArXiv / on (web) Publishing site
- III. Vulnerability Assessment
- How Ethical Should AI Be? How AI Alignment Shapes the Risk Preferences of LLMs / 2406.01168 / ISBN:https://doi.org/10.48550/arXiv.2406.01168 / Published by ArXiv / on (web) Publishing site
- Abstract
II. Risk Characteristics of LLMs
III. Impact of Alignment on LLMs’ Risk Preferences
- Deception Analysis with Artificial Intelligence: An Interdisciplinary Perspective / 2406.05724 / ISBN:https://doi.org/10.48550/arXiv.2406.05724 / Published by ArXiv / on (web) Publishing site
- 2 Theories and Components of Deception
4 DAMAS: A MAS Framework for Deception Analysis
- An Empirical Design Justice Approach to Identifying Ethical Considerations in the Intersection of Large Language Models and Social Robotics / 2406.06400 / ISBN:https://doi.org/10.48550/arXiv.2406.06400 / Published by ArXiv / on (web) Publishing site
- 3 Methodology
4 Findings
- Global AI Governance in Healthcare: A Cross-Jurisdictional Regulatory Analysis / 2406.08695 / ISBN:https://doi.org/10.48550/arXiv.2406.08695 / Published by ArXiv / on (web) Publishing site
- 4 Global Regulatory Landscape of AI
- Staying vigilant in the Age of AI: From content generation to content authentication / 2407.00922 / ISBN:https://doi.org/10.48550/arXiv.2407.00922 / Published by ArXiv / on (web) Publishing site
- Emphasizing Reasoning Over Detection
Prospective Usage: Assessing Veracity in Everyday Content
- A Blueprint for Auditing Generative AI / 2407.05338 / ISBN:https://doi.org/10.48550/arXiv.2407.05338 / Published by ArXiv / on (web) Publishing site
- 4 Governance audits
- Operationalising AI governance through ethics-based auditing: An industry case study / 2407.06232 / Published by ArXiv / on (web) Publishing site
- 4. An ‘ethics-based’ AI audit
7. Limitations
- PoliTune: Analyzing the Impact of Data Selection and Fine-Tuning on Economic and Political Biases in Large Language Models / 2404.08699 / ISBN:https://doi.org/10.48550/arXiv.2404.08699 / Published by ArXiv / on (web) Publishing site
- F Additional Figures
- Bridging the Global Divide in AI Regulation: A Proposal for a Contextual, Coherent, and Commensurable Framework / 2303.11196 / ISBN:https://doi.org/10.48550/arXiv.2303.11196 / Published by ArXiv / on (web) Publishing site
- II. Global Divide in AI Regulation: Horizontally. Context-Specific
- With Great Power Comes Great Responsibility: The Role of Software Engineers / 2407.08823 / ISBN:https://doi.org/10.48550/arXiv.2407.08823 / Published by ArXiv / on (web) Publishing site
- 3 Future Research Challenges
- Assurance of AI Systems From a Dependability Perspective / 2407.13948 / ISBN:https://doi.org/10.48550/arXiv.2407.13948 / Published by ArXiv / on (web) Publishing site
- 3 Assurance of AI Systems for Specific Functions
- Honest Computing: Achieving demonstrable data lineage and provenance for driving data and process-sensitive policies / 2407.14390 / ISBN:https://doi.org/10.48550/arXiv.2407.14390 / Published by ArXiv / on (web) Publishing site
- 1. Introduction
- RogueGPT: dis-ethical tuning transforms ChatGPT4 into a Rogue AI in 158 Words / 2407.15009 / ISBN:https://doi.org/10.48550/arXiv.2407.15009 / Published by ArXiv / on (web) Publishing site
- III. Methodology
- AI for All: Identifying AI incidents Related to Diversity and Inclusion / 2408.01438 / ISBN:https://doi.org/10.48550/arXiv.2408.01438 / Published by ArXiv / on (web) Publishing site
- 4 Results
6 Threats to Validity
- AI-Driven Chatbot for Intrusion Detection in Edge Networks: Enhancing Cybersecurity with Ethical User Consent / 2408.04281 / ISBN:https://doi.org/10.48550/arXiv.2408.04281 / Published by ArXiv / on (web) Publishing site
- III. Methodology
- Criticizing Ethics According to Artificial Intelligence / 2408.04609 / ISBN:https://doi.org/10.48550/arXiv.2408.04609 / Published by ArXiv / on (web) Publishing site
- 3 Critical Reflection on AI Risks
- Visualization Atlases: Explaining and Exploring Complex Topics through Data, Visualization, and Narration / 2408.07483 / ISBN:https://doi.org/10.48550/arXiv.2408.07483 / Published by ArXiv / on (web) Publishing site
- 4 Interviews with Visualization Atlas Creators
- Don't Kill the Baby: The Case for AI in Arbitration / 2408.11608 / ISBN:https://doi.org/10.48550/arXiv.2408.11608 / Published by ArXiv / on (web) Publishing site
- I. AI and the Federal Arbitration Act
- CIPHER: Cybersecurity Intelligent Penetration-testing Helper for Ethical Researcher / 2408.11650 / ISBN:https://doi.org/10.48550/arXiv.2408.11650 / Published by ArXiv / on (web) Publishing site
- 4. Experiment Results
- The Problems with Proxies: Making Data Work Visible through Requester Practices / 2408.11667 / ISBN:https://doi.org/10.48550/arXiv.2408.11667 / Published by ArXiv / on (web) Publishing site
- Findings
Discussion
- Dataset | Mindset = Explainable AI | Interpretable AI / 2408.12420 / ISBN:https://doi.org/10.48550/arXiv.2408.12420 / Published by ArXiv / on (web) Publishing site
- 4. Experiment Implementation, Results and Analysis
- Has Multimodal Learning Delivered Universal Intelligence in Healthcare? A Comprehensive Survey / 2408.12880 / ISBN:https://doi.org/10.48550/arXiv.2408.12880 / Published by ArXiv / on (web) Publishing site
- 7 Challenges and Future Directions
- Trustworthy and Responsible AI for Human-Centric Autonomous Decision-Making Systems / 2408.15550 / ISBN:https://doi.org/10.48550/arXiv.2408.15550 / Published by ArXiv / on (web) Publishing site
- 4 Biases
5 Trustworthy and Responsible AI in Human-centric Applications
- Navigating LLM Ethics: Advancements, Challenges, and Future Directions / 2406.18841 / ISBN:https://doi.org/10.48550/arXiv.2406.18841 / Published by ArXiv / on (web) Publishing site
- IV. Findings and Resultant Themes
- Reporting Non-Consensual Intimate Media: An Audit Study of Deepfakes / 2409.12138 / ISBN:https://doi.org/10.48550/arXiv.2409.12138 / Published by ArXiv / on (web) Publishing site
- 3 Method
5 Discussion
Appendices - XTRUST: On the Multilingual Trustworthiness of Large Language Models / 2409.15762 / ISBN:https://doi.org/10.48550/arXiv.2409.15762 / Published by ArXiv / on (web) Publishing site
- Appendices
- Social Media Bot Policies: Evaluating Passive and Active Enforcement / 2409.18931 / ISBN:https://doi.org/10.48550/arXiv.2409.18931 / Published by ArXiv / on (web) Publishing site
- V. Results
- Responsible AI in Open Ecosystems: Reconciling Innovation with Risk Assessment and Disclosure / 2409.19104 / ISBN:https://doi.org/10.48550/arXiv.2409.19104 / Published by ArXiv / on (web) Publishing site
- 4 Results
5 Discussion
- The Gradient of Health Data Privacy / 2410.00897 / ISBN:https://doi.org/10.48550/arXiv.2410.00897 / Published by ArXiv / on (web) Publishing site
- 4 Technical Implementation of a Privacy Gradient Model
- Ethical software requirements from user reviews: A systematic literature review / 2410.01833 / ISBN:https://doi.org/10.48550/arXiv.2410.01833 / Published by ArXiv / on (web) Publishing site
- II. Background
- DailyDilemmas: Revealing Value Preferences of LLMs with Quandaries of Daily Life / 2410.02683 / ISBN:https://doi.org/10.48550/arXiv.2410.02683 / Published by ArXiv / on (web) Publishing site
- 5 Examining LLM's Adherence to Design Principles and the Steerability of Value Preferences
- Learning Human-like Representations to Enable Learning Human Values / 2312.14106 / ISBN:https://doi.org/10.48550/arXiv.2312.14106 / Published by ArXiv / on (web) Publishing site
- A. Appendix
- Do LLMs Have Political Correctness? Analyzing Ethical Biases and Jailbreak Vulnerabilities in AI Systems / 2410.13334 / ISBN:https://doi.org/10.48550/arXiv.2410.13334 / Published by ArXiv / on (web) Publishing site
- 4 Experiment
- Trustworthy XAI and Application / 2410.17139 / ISBN:https://doi.org/10.48550/arXiv.2410.17139 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 XAI Vs AI
- Towards Automated Penetration Testing: Introducing LLM Benchmark, Analysis, and Improvements / 2410.17141 / ISBN:https://doi.org/10.48550/arXiv.2410.17141 / Published by ArXiv / on (web) Publishing site
- 3 Benchmark
Supplementary Materials
- Ethical Leadership in the Age of AI Challenges, Opportunities and Framework for Ethical Leadership / 2410.18095 / ISBN:https://doi.org/10.48550/arXiv.2410.18095 / Published by ArXiv / on (web) Publishing site
- Ethical Challenges Presented by AI
- TRIAGE: Ethical Benchmarking of AI Models Through Mass Casualty Simulations / 2410.18991 / ISBN:https://doi.org/10.48550/arXiv.2410.18991 / Published by ArXiv / on (web) Publishing site
- Appendices
- Standardization Trends on Safety and Trustworthiness Technology for Advanced AI / 2410.22151 / ISBN:https://doi.org/10.48550/arXiv.2410.22151 / Published by ArXiv / on (web) Publishing site
- 4 Conclusion
- Where Assessment Validation and Responsible AI Meet / 2411.02577 / ISBN:https://doi.org/10.48550/arXiv.2411.02577 / Published by ArXiv / on (web) Publishing site
- Classical Assessment Validation Theory and Responsible AI
- Examining Human-AI Collaboration for Co-Writing Constructive Comments Online / 2411.03295 / ISBN:https://doi.org/10.48550/arXiv.2411.03295 / Published by ArXiv / on (web) Publishing site
- Abstract
- I Always Felt that Something Was Wrong.: Understanding Compliance Risks and Mitigation Strategies when Professionals Use Large Language Models / 2411.04576 / ISBN:https://doi.org/10.48550/arXiv.2411.04576 / Published by ArXiv / on (web) Publishing site
- 3 Method: Semi-structured Interviews
- How should AI decisions be explained? Requirements for Explanations from the Perspective of European Law / 2404.12762 / ISBN:https://doi.org/10.48550/arXiv.2404.12762 / Published by ArXiv / on (web) Publishing site
- 7 Summary
- Large-scale moral machine experiment on large language models / 2411.06790 / ISBN:https://doi.org/10.48550/arXiv.2411.06790 / Published by ArXiv / on (web) Publishing site
- Results
- Enhancing Accessibility in Special Libraries: A Study on AI-Powered Assistive Technologies for Patrons with Disabilities / 2411.06970 / ISBN:https://doi.org/10.48550/arXiv.2411.06970 / Published by ArXiv / on (web) Publishing site
- 6. Data Collection Method
- Human-Centered AI Transformation: Exploring Behavioral Dynamics in Software Engineering / 2411.08693 / ISBN:https://doi.org/10.48550/arXiv.2411.08693 / Published by ArXiv / on (web) Publishing site
- I. Introduction
- Chat Bankman-Fried: an Exploration of LLM Alignment in Finance / 2411.11853 / ISBN:https://doi.org/10.48550/arXiv.2411.11853 / Published by ArXiv / on (web) Publishing site
- Appendices
- Good intentions, unintended consequences: exploring forecasting harms / 2411.16531 / ISBN:https://doi.org/10.48550/arXiv.2411.16531 / Published by ArXiv / on (web) Publishing site
- 3 Methods
- AI-Augmented Ethical Hacking: A Practical Examination of Manual Exploitation and Privilege Escalation in Linux Environments / 2411.17539 / ISBN:https://doi.org/10.48550/arXiv.2411.17539 / Published by ArXiv / on (web) Publishing site
- 5 Execution
- Examining Multimodal Gender and Content Bias in ChatGPT-4o / 2411.19140 / ISBN:https://doi.org/10.48550/arXiv.2411.19140 / Published by ArXiv / on (web) Publishing site
- 3. Textual Generation Experiment
- Ethics and Artificial Intelligence Adoption / 2412.00330 / ISBN:https://doi.org/10.48550/arXiv.2412.00330 / Published by ArXiv / on (web) Publishing site
- Abstract
V. Analysis and Results
- Large Language Models in Politics and Democracy: A Comprehensive Survey / 2412.04498 / ISBN:https://doi.org/10.48550/arXiv.2412.04498 / Published by ArXiv / on (web) Publishing site
- 3. LLM Applications in Politics
- From Principles to Practice: A Deep Dive into AI Ethics and Regulations / 2412.04683 / ISBN:https://doi.org/10.48550/arXiv.2412.04683 / Published by ArXiv / on (web) Publishing site
- II AI Practice and Contextual Integrity
- Employee Well-being in the Age of AI: Perceptions, Concerns, Behaviors, and Outcomes / 2412.04796 / ISBN:https://doi.org/10.48550/arXiv.2412.04796 / Published by ArXiv / on (web) Publishing site
- Introduction
- Technology as uncharted territory: Contextual integrity and the notion of AI as new ethical ground / 2412.05130 / ISBN:https://doi.org/10.48550/arXiv.2412.05130 / Published by ArXiv / on (web) Publishing site
- II AI Practice and Contextual Integrity
- Can OpenAI o1 outperform humans in higher-order cognitive thinking? / 2412.05753 / ISBN:https://doi.org/10.48550/arXiv.2412.05753 / Published by ArXiv / on (web) Publishing site
- 2 Methods
6 Data availability statement
- Political-LLM: Large Language Models in Political Science / 2412.06864 / ISBN:https://doi.org/10.48550/arXiv.2412.06864 / Published by ArXiv / on (web) Publishing site
- 5 Technical Foundations for LLM Applications in Political Science
6 Future Directions & Challenges
- On Large Language Models in Mission-Critical IT Governance: Are We Ready Yet? / 2412.11698 / ISBN:https://doi.org/10.48550/arXiv.2412.11698 / Published by ArXiv / on (web) Publishing site
- IV. Discussions
- Bots against Bias: Critical Next Steps for Human-Robot Interaction / 2412.12542 / ISBN:https://doi.org/10.1017/9781009386708.023 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
- Clio: Privacy-Preserving Insights into Real-World AI Use / 2412.13678 / ISBN:https://doi.org/10.48550/arXiv.2412.13678 / Published by ArXiv / on (web) Publishing site
- 4 Clio for safety
- Autonomous Alignment with Human Value on Altruism through Considerate Self-imagination and Theory of Mind / 2501.00320 / ISBN:https://doi.org/10.48550/arXiv.2501.00320 / Published by ArXiv / on (web) Publishing site
- 2 Results
- Generative AI and LLMs in Industry: A text-mining Analysis and Critical Evaluation of Guidelines and Policy Statements Across Fourteen Industrial Sectors / 2501.00957 / ISBN:https://doi.org/10.48550/arXiv.2501.00957 / Published by ArXiv / on (web) Publishing site
- V. Discussion and Synthesis
- Curious, Critical Thinker, Empathetic, and Ethically Responsible: Essential Soft Skills for Data Scientists in Software Engineering / 2501.02088 / ISBN:https://doi.org/10.48550/arXiv.2501.02088 / Published by ArXiv / on (web) Publishing site
- III. Method
- Trust and Dependability in Blockchain & AI Based MedIoT Applications: Research Challenges and Future Directions / 2501.02647 / ISBN:https://doi.org/10.48550/arXiv.2501.02647 / Published by ArXiv / on (web) Publishing site
- Med-IoT Applications
- Concerns and Values in Human-Robot Interactions: A Focus on Social Robotics / 2501.05628 / ISBN:https://doi.org/10.48550/arXiv.2501.05628 / Published by ArXiv / on (web) Publishing site
- 5 Phase 3: Design and Evaluation of the HRI-Value Compass
- Towards A Litmus Test for Common Sense / 2501.09913 / ISBN:https://doi.org/10.48550/arXiv.2501.09913 / Published by ArXiv / on (web) Publishing site
- 9 Open Challenges and Future Work
- Uncovering Bias in Foundation Models: Impact, Testing, Harm, and Mitigation / 2501.10453 / ISBN:https://doi.org/10.48550/arXiv.2501.10453 / Published by ArXiv / on (web) Publishing site
- 2 Results and Discussion
- Governance of Generative AI in Creative Work: Consent, Credit, Compensation, and Beyond / 2501.11457 / ISBN:https://doi.org/10.48550/arXiv.2501.11457 / Published by ArXiv / on (web) Publishing site
- 5 Discussion
- A Critical Field Guide for Working with Machine Learning Datasets / 2501.15491 / ISBN:https://doi.org/10.48550/arXiv.2501.15491 / Published by ArXiv / on (web) Publishing site
- 1. Introduction to Machine Learning Datasets
3. Parts of a Dataset
4. Types of Datasets
6. The Dataset Lifecycle
- Responsible Generative AI Use by Product Managers: Recoupling Ethical Principles and Practices / 2501.16531 / ISBN:https://doi.org/10.48550/arXiv.2501.16531 / Published by ArXiv / on (web) Publishing site
- 4. Methods
- A Case Study in Acceleration AI Ethics: The TELUS GenAI Conversational Agent / 2501.18038 / ISBN:https://doi.org/10.48550/arXiv.2501.18038 / Published by ArXiv / on (web) Publishing site
- 5. Mapping overlaps between TELUS innovation and acceleration ethics in the area of privacy
- Examining the Expanding Role of Synthetic Data Throughout the AI Development Pipeline / 2501.18493 / ISBN:https://doi.org/10.48550/arXiv.2501.18493 / Published by ArXiv / on (web) Publishing site
- 4 Findings
- DebiasPI: Inference-time Debiasing by Prompt Iteration of a Text-to-Image Generative Model / 2501.18642 / ISBN:https://doi.org/10.48550/arXiv.2501.18642 / Published by ArXiv / on (web) Publishing site
- 4 Experiments and Results
- The Human-AI Handshake Framework: A Bidirectional Approach to Human-AI Collaboration / 2502.01493 / ISBN:https://doi.org/10.48550/arXiv.2502.01493 / Published by ArXiv / on (web) Publishing site
- Literature Review
- Ethical Considerations for the Military Use of Artificial Intelligence in Visual Reconnaissance / 2502.03376 / ISBN:https://doi.org/10.48550/arXiv.2502.03376 / Published by ArXiv / on (web) Publishing site
- 3 Use Case 1 - Decision Support for Maritime Surveillance
4 Use Case 2 - Decision Support for Military Camp Protection
5 Use Case 3 - Land-based Reconnaissance in Inhabited Area
- Cognitive AI framework: advances in the simulation of human thought / 2502.04259 / ISBN:https://doi.org/10.48550/arXiv.2502.04259 / Published by ArXiv / on (web) Publishing site
- 3. Data Flow and Process Logic
- Safety at Scale: A Comprehensive Survey of Large Model Safety / 2502.05206 / ISBN:https://doi.org/10.48550/arXiv.2502.05206 / Published by ArXiv / on (web) Publishing site
- 6 Diffusion Model Safety
- Integrating Generative Artificial Intelligence in ADRD: A Framework for Streamlining Diagnosis and Care in Neurodegenerative Diseases / 2502.06842 / ISBN:https://doi.org/10.48550/arXiv.2502.06842 / Published by ArXiv / on (web) Publishing site
- High Quality Data Collection
- Fairness in Multi-Agent AI: A Unified Framework for Ethical and Equitable Autonomous Systems / 2502.07254 / ISBN:https://doi.org/10.48550/arXiv.2502.07254 / Published by ArXiv / on (web) Publishing site
- Paper
Conclusion
- From large language models to multimodal AI: A scoping review on the potential of generative AI in medicine / 2502.09242 / ISBN:https://doi.org/10.48550/arXiv.2502.09242 / Published by ArXiv / on (web) Publishing site
- Appendices
- On the Trustworthiness of Generative Foundation Models: Guideline, Assessment, and Perspective / 2502.14296 / ISBN:https://doi.org/10.48550/arXiv.2502.14296 / Published by ArXiv / on (web) Publishing site
- 5 Benchmarking Text-to-Image Models
- Evaluating Large Language Models on the Spanish Medical Intern Resident (MIR) Examination 2024/2025:A Comparative Analysis of Clinical Reasoning and Knowledge Application / 2503.00025 / ISBN:https://doi.org/10.48550/arXiv.2503.00025 / Published by ArXiv / on (web) Publishing site
- 5. Conclusion
- Transforming Cyber Defense: Harnessing Agentic and Frontier AI for Proactive, Ethical Threat Intelligence / 2503.00164 / ISBN:https://doi.org/10.48550/arXiv.2503.00164 / Published by ArXiv / on (web) Publishing site
- 4 Agentic AI and Frontier AI in Cybersecurity
- Vision Language Models in Medicine / 2503.01863 / ISBN:https://doi.org/10.48550/arXiv.2503.01863 / Published by ArXiv / on (web) Publishing site
- III. Core Concepts of Visual Language Modeling
- Twenty Years of Personality Computing: Threats, Challenges and Future Directions / 2503.02082 / ISBN:https://doi.org/10.48550/arXiv.2503.02082 / Published by ArXiv / on (web) Publishing site
- 2 Background, History and Resources
- Compliance of AI Systems / 2503.05571 / ISBN:https://doi.org/10.48550/arXiv.2503.05571 / Published by ArXiv / on (web) Publishing site
- III. XAI and Legal Compliance
- Medical Hallucinations in Foundation Models and Their Impact on Healthcare / 2503.05777 / ISBN:https://doi.org/10.48550/arXiv.2503.05777 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 LLM Hallucinations in Medicine
4 Detection and Evaluation of Medical Hallucinations
5 Mitigation Strategies
8 Survey on AI/LLM Adoption and Medical Hallucinations Among Healthcare Professionals and Researchers
Appendices
- Decoding the Black Box: Integrating Moral Imagination with Technical AI Governance / 2503.06411 / ISBN:https://doi.org/10.48550/arXiv.2503.06411 / Published by ArXiv / on (web) Publishing site
- 3 Applying Systems Thinking
- AI Governance InternationaL Evaluation Index (AGILE Index) / 2502.15859 / ISBN:https://doi.org/10.48550/arXiv.2502.15859 / Published by ArXiv / on (web) Publishing site
- 5. Appendix
- MinorBench: A hand-built benchmark for content-based risks for children / 2503.10242 / ISBN:https://doi.org/10.48550/arXiv.2503.10242 / Published by ArXiv / on (web) Publishing site
- 3 Case Study
- LLMs in Disease Diagnosis: A Comparative Study of DeepSeek-R1 and O3 Mini Across Chronic Health Conditions / 2503.10486 / ISBN:https://doi.org/10.48550/arXiv.2503.10486 / Published by ArXiv / on (web) Publishing site
- 5 Discussion
- DarkBench: Benchmarking Dark Patterns in Large Language Models / 2503.10728 / ISBN:https://doi.org/10.48550/arXiv.2503.10728 / Published by ArXiv / on (web) Publishing site
- 2 Methodology
4 Discussion
Appendices
- Advancing Human-Machine Teaming: Concepts, Challenges, and Applications / 2503.16518 / ISBN:https://doi.org/10.48550/arXiv.2503.16518 / Published by ArXiv / on (web) Publishing site
- 3 Empirical Studies to Promote Team Performance
- Gender and content bias in Large Language Models: a case study on Google Gemini 2.0 Flash Experimental / 2503.16534 / ISBN:https://doi.org/10.48550/arXiv.2503.16534 / Published by ArXiv / on (web) Publishing site
- 2 Materials and methods
3 Results
- AI Identity, Empowerment, and Mindfulness in Mitigating Unethical AI Use / 2503.20099 / ISBN:https://doi.org/10.48550/arXiv.2503.20099 / Published by ArXiv / on (web) Publishing site
- Literature Review
Discussion
- Generative AI and News Consumption: Design Fictions and Critical Analysis / 2503.20391 / ISBN:https://doi.org/10.48550/arXiv.2503.20391 / Published by ArXiv / on (web) Publishing site
- 4 Results
- AI Family Integration Index (AFII): Benchmarking a New Global Readiness for AI as Family / 2503.22772 / ISBN:https://doi.org/10.48550/arXiv.2503.22772 / Published by ArXiv / on (web) Publishing site
- 6. Discussions
- BEATS: Bias Evaluation and Assessment Test Suite for Large Language Models / 2503.24310 / ISBN:https://doi.org/10.48550/arXiv.2503.24310 / Published by ArXiv / on (web) Publishing site
- 4 Limitations
- Who is Responsible When AI Fails? Mapping Causes, Entities, and Consequences of AI Privacy and Ethical Incidents / 2504.01029 / ISBN:https://doi.org/10.48550/arXiv.2504.01029 / Published by ArXiv / on (web) Publishing site
- 1. Introduction
3. Methodology
6. Conclusion
- Who Owns the Output? Bridging Law and Technology in LLMs Attribution / 2504.01032 / ISBN:https://doi.org/10.48550/arXiv.2504.01032 / Published by ArXiv / on (web) Publishing site
- 4 Legal cases as Use Cases for Attribution Methods
- Systematic Literature Review: Explainable AI Definitions and Challenges in Education / 2504.02910 / ISBN:https://doi.org/10.48550/arXiv.2504.02910 / Published by ArXiv / on (web) Publishing site
- Methodology
- Automating the Path: An R&D Agenda for Human-Centered AI and Visualization / 2504.07529 / ISBN:https://doi.org/10.48550/arXiv.2504.07529 / Published by ArXiv / on (web) Publishing site
- Explore
- Towards interactive evaluations for interaction harms in human-AI systems / 2405.10632 / ISBN:https://doi.org/10.48550/arXiv.2405.10632 / Published by ArXiv / on (web) Publishing site
- 3 Why current evaluations approaches are insufficient for assessing interaction harms
- An Empirical Study on Decision-Making Aspects in Responsible Software Engineering for AI / 2501.15691 / ISBN:https://doi.org/10.48550/arXiv.2501.15691 / Published by ArXiv / on (web) Publishing site
- III. Research Methodology
IV. Results and Analysis
- What is Being Evaluated?
- Confirmation Bias in Generative AI Chatbots: Mechanisms, Risks, Mitigation Strategies, and Future Research Directions / 2504.09343 / ISBN:https://doi.org/10.48550/arXiv.2504.09343 / Published by ArXiv / on (web) Publishing site
- Abstract
1. Introduction
2. Conceptual Underpinnings of Confirmation Bias
3. Confirmation Bias in Generative AI Chatbots
4. Mechanisms of Confirmation Bias in Chatbot Architectures
5. Risks and Ethical Implications
6. Mitigation Strategies
7. Future Research Directions
8. Conclusion
- Designing AI-Enabled Countermeasures to Cognitive Warfare / 2504.11486 / ISBN:https://doi.org/10.48550/arXiv.2504.11486 / Published by ArXiv / on (web) Publishing site
- 2.0 Cognitive Warfare in Practice
- Values in the Wild: Discovering and Analyzing Values in Real-World Language Model Interactions / 2504.15236 / ISBN:https://doi.org/10.48550/arXiv.2504.15236 / Published by ArXiv / on (web) Publishing site
- Appendix
- Approaches to Responsible Governance of GenAI in Organizations / 2504.17044 / ISBN:https://doi.org/10.48550/arXiv.2504.17044 / Published by ArXiv / on (web) Publishing site
- IV. Solutions to Address Concerns
- Auditing the Ethical Logic of Generative AI Models / 2504.17544 / ISBN:https://doi.org/10.48550/arXiv.2504.17544 / Published by ArXiv / on (web) Publishing site
- Seven Contemporary LLMs
- AI Ethics and Social Norms: Exploring ChatGPT's Capabilities From What to How / 2504.18044 / ISBN:https://doi.org/10.48550/arXiv.2504.18044 / Published by ArXiv / on (web) Publishing site
- 3 Methodology
4 Result
5 Discussion
- A Design Framework for operationalizing Trustworthy Artificial Intelligence in Healthcare: Requirements, Tradeoffs and Challenges for its Clinical Adoption / 2504.19179 / ISBN:https://doi.org/10.48550/arXiv.2504.19179 / Published by ArXiv / on (web) Publishing site
- 4. Design framework for medical AI systems
5. Tradeoffs between TAI principles and requirements
- TF1-EN-3M: Three Million Synthetic Moral Fables for Training Small, Open Language Models / 2504.20605 / ISBN:https://doi.org/10.48550/arXiv.2504.20605 / Published by ArXiv / on (web) Publishing site
- 3 LLM Evaluation and Comparison with Related Work
- Federated learning, ethics, and the double black box problem in medical AI / 2504.20656 / ISBN:https://doi.org/10.48550/arXiv.2504.20656 / Published by ArXiv / on (web) Publishing site
- 5 The double black box problem
- Generative AI in Financial Institution: A Global Survey of Opportunities, Threats, and Regulation / 2504.21574 / ISBN:https://doi.org/10.48550/arXiv.2504.21574 / Published by ArXiv / on (web) Publishing site
- 2. Adoption and Applications of Generative AI in Financial Services
- Emotions in the Loop: A Survey of Affective Computing for Emotional Support / 2505.01542 / ISBN:https://doi.org/10.48550/arXiv.2505.01542 / Published by ArXiv / on (web) Publishing site
- IV. Applications and Approaches
- Ethical AI in the Healthcare Sector: Investigating Key Drivers of Adoption through the Multi-Dimensional Ethical AI Adoption Model (MEAAM) / 2505.02062 / ISBN:https://doi.org/10.9734/ajmah/2025/v23i51228 / Published by ArXiv / on (web) Publishing site
- 3. Research Methods
5. Conclusion
Appendix
- GenAI in Entrepreneurship: a systematic review of generative artificial intelligence in entrepreneurship research: current issues and future directions / 2505.05523 / ISBN:https://doi.org/10.48550/arXiv.2505.05523 / Published by ArXiv / on (web) Publishing site
- 4. Findings
- AI and Generative AI Transforming Disaster Management: A Survey of Damage Assessment and Response Techniques / 2505.08202 / ISBN:https://doi.org/10.48550/arXiv.2505.08202 / Published by ArXiv / on (web) Publishing site
- II Domain-Specific Literature Review
- Ethics and Persuasion in Reinforcement Learning from Human Feedback: A Procedural Rhetorical Approach / 2505.09576 / ISBN:https://doi.org/10.48550/arXiv.2505.09576 / Published by ArXiv / on (web) Publishing site
- IV Persuasive Procedures in Generative AI
- WorldView-Bench: A Benchmark for Evaluating Global Cultural Perspectives in Large Language Models / 2505.09595 / ISBN:https://doi.org/10.48550/arXiv.2505.09595 / Published by ArXiv / on (web) Publishing site
- 6 Discussion and Potential Limitations
- Aligning Trustworthy AI with Democracy: A Dual Taxonomy of Opportunities and Risks / 2505.13565 / ISBN:https://doi.org/10.48550/arXiv.2505.13565 / Published by ArXiv / on (web) Publishing site
- 4 Risk taxonomy: risks posed by AI to democracy
5 Trustworthy AI requirements for AI risk mitigation
- AI vs. Human Judgment of Content Moderation: LLM-as-a-Judge and Ethics-Based Response Refusals / 2505.15365 / ISBN:https://doi.org/10.48550/arXiv.2505.15365 / Published by ArXiv / on (web) Publishing site
- 4 Results
- Exploring Moral Exercises for Human Oversight of AI systems: Insights from Three Pilot Studies / 2505.15851 / ISBN:https://doi.org/10.48550/arXiv.2505.15851 / Published by ArXiv / on (web) Publishing site
- 2 Moral Exercises in the Context of Human Oversight
3 Pilot Studies
4 Discussion
Appendix
- Advancing the Scientific Method with Large Language Models: From Hypothesis to Discovery / 2505.16477 / ISBN:https://doi.org/10.48550/arXiv.2505.16477 / Published by ArXiv / on (web) Publishing site
- Toward Large Language Models as Creative Engines for Fundamental Science
- Cultural Value Alignment in Large Language Models: A Prompt-based Analysis of Schwartz Values in Gemini, ChatGPT, and DeepSeek / 2505.17112 / ISBN:https://doi.org/10.48550/arXiv.2505.17112 / Published by ArXiv / on (web) Publishing site
- Introduction
- TEDI: Trustworthy and Ethical Dataset Indicators to Analyze and Compare Dataset Documentation / 2505.17841 / ISBN:https://doi.org/10.48550/arXiv.2505.17841 / Published by ArXiv / on (web) Publishing site
- 2 Trustworthy and Ethical Dataset Indicators (TEDI)
- Just as Humans Need Vaccines, So Do Models: Model Immunization to Combat Falsehoods / 2505.17870 / ISBN:https://doi.org/10.48550/arXiv.2505.17870 / Published by ArXiv / on (web) Publishing site
- 2 Conceptual Framework: Model Immunization via Quarantined Falsehoods
- Opacity as a Feature, Not a Flaw: The LoBOX Governance Ethic for Role-Sensitive Explainability and Institutional Trust in AI / 2505.20304 / ISBN:https://doi.org/10.48550/arXiv.2505.20304 / Published by ArXiv / on (web) Publishing site
- 2 Opacity as a Frontier of Ethical Design
- SafeScientist: Toward Risk-Aware Scientific Discoveries by LLM Agents / 2505.23559 / ISBN:https://doi.org/10.48550/arXiv.2505.23559 / Published by ArXiv / on (web) Publishing site
- Abstract
- Exposing the Impact of GenAI for Cybercrime: An Investigation into the Dark Side / 2505.23733 / ISBN:https://doi.org/10.48550/arXiv.2505.23733 / Published by ArXiv / on (web) Publishing site
- 5 Data Analysis and Results
- Exploring Societal Concerns and Perceptions of AI: A Thematic Analysis through the Lens of Problem-Seeking / 2505.23930 / ISBN:https://doi.org/10.48550/arXiv.2505.23930 / Published by ArXiv / on (web) Publishing site
- Foundational cognitive processes
Discussion
- Locating Risk: Task Designers and the Challenge of Risk Disclosure in RAI Content Work / 2505.24246 / ISBN:https://doi.org/10.48550/arXiv.2505.24246 / Published by ArXiv / on (web) Publishing site
- 3 Methods
- Bottom-Up Perspectives on AI Governance: Insights from User Reviews of AI Products / 2506.00080 / ISBN:https://doi.org/10.48550/arXiv.2506.00080 / Published by ArXiv / on (web) Publishing site
- 3. Methodology
- Mitigating Manipulation and Enhancing Persuasion: A Reflective Multi-Agent Approach for Legal Argument Generation / 2506.02992 / ISBN:https://doi.org/10.48550/arXiv.2506.02992 / Published by ArXiv / on (web) Publishing site
- 5 Results and Analysis
Appendix
- HADA: Human-AI Agent Decision Alignment Architecture / 2506.04253 / ISBN:https://doi.org/10.48550/arXiv.2506.04253 / Published by ArXiv / on (web) Publishing site
- 4 Demonstration
5 Evaluation