To search for more than one keyword, separate keywords with an underscore (_). The list of search keywords can be up to 50 characters long. After modifying the keywords, press Enter within the field to confirm the new search key.
Tag: inconsistencies
Bibliography items where it occurs: 41
- Beyond Near- and Long-Term: Towards a Clearer Account of Research Priorities in AI Ethics and Society / 2001.04335 / DOI:https://doi.org/10.48550/arXiv.2001.04335 / Published by ArXiv / on (web) Publishing site
  - Abstract
  - 1 Introduction
- Dual Governance: The intersection of centralized regulation and crowdsourced safety mechanisms for Generative AI / 2308.04448 / DOI:https://doi.org/10.48550/arXiv.2308.04448 / Published by ArXiv / on (web) Publishing site
  - 4 Centralized regulation in the US context
- A Survey of Large Language Models for Healthcare: from Data, Technology, and Applications to Accountability and Ethics / 2310.05694 / DOI:https://doi.org/10.48550/arXiv.2310.05694 / Published by ArXiv / on (web) Publishing site
  - IV. TRAIN AND USE LLM FOR HEALTHCARE
- Ensuring Trustworthy Medical Artificial Intelligence through Ethical and Philosophical Principles / 2304.11530 / DOI:https://doi.org/10.48550/arXiv.2304.11530 / Published by ArXiv / on (web) Publishing site
  - Ethical datasets and algorithm development guidelines
  - Ethical guidelines for medical AI model deployment
- The Self 2.0: How AI-Enhanced Self-Clones Transform Self-Perception and Improve Presentation Skills / 2310.15112 / DOI:https://doi.org/10.48550/arXiv.2310.15112 / Published by ArXiv / on (web) Publishing site
  - 3 Method
- Deepfakes, Misinformation, and Disinformation in the Era of Frontier AI, Generative AI, and Large AI Models / 2311.17394 / DOI:https://doi.org/10.48550/arXiv.2311.17394 / Published by ArXiv / on (web) Publishing site
  - V. Technical defense mechanisms
- RE-centric Recommendations for the Development of Trustworthy(er) Autonomous Systems / 2306.01774 / DOI:https://doi.org/10.48550/arXiv.2306.01774 / Published by ArXiv / on (web) Publishing site
  - Abstract
  - 3 Methodology
- Investigating Responsible AI for Scientific Research: An Empirical Study / 2312.09561 / DOI:https://doi.org/10.48550/arXiv.2312.09561 / Published by ArXiv / on (web) Publishing site
  - I. Introduction
- Autonomous Threat Hunting: A Future Paradigm for AI-Driven Threat Intelligence / 2401.00286 / DOI:https://doi.org/10.48550/arXiv.2401.00286 / Published by ArXiv / on (web) Publishing site
  - 8. Future directions and emerging trends
- Unmasking Bias in AI: A Systematic Review of Bias Detection and Mitigation Strategies in Electronic Health Record-based Models / 2310.19917 / DOI:https://doi.org/10.48550/arXiv.2310.19917 / Published by ArXiv / on (web) Publishing site
  - Background and significance
  - Materials and methods
- Detecting Multimedia Generated by Large AI Models: A Survey / 2402.00045 / DOI:https://doi.org/10.48550/arXiv.2402.00045 / Published by ArXiv / on (web) Publishing site
  - 3 Detection
  - 4 Tools
- Inadequacies of Large Language Model Benchmarks in the Era of Generative Artificial Intelligence / 2402.09880 / DOI:https://doi.org/10.48550/arXiv.2402.09880 / Published by ArXiv / on (web) Publishing site
  - Abstract
  - V. Processual Elements
  - VI. Human Dynamics
  - Appendix B Examples of Benchmark Inadequacies in Processual Elements
  - Appendix C Examples of Benchmark Inadequacies in Human Dynamics
- The European Commitment to Human-Centered Technology: The Integral Role of HCI in the EU AI Act's Success / 2402.14728 / DOI:https://doi.org/10.48550/arXiv.2402.14728 / Published by ArXiv / on (web) Publishing site
  - 1 The increasing importance of AI
- Multi-stakeholder Perspective on Responsible Artificial Intelligence and Acceptability in Education / 2402.15027 / DOI:https://doi.org/10.48550/arXiv.2402.15027 / Published by ArXiv / on (web) Publishing site
  - 2 Background
- FATE in MMLA: A Student-Centred Exploration of Fairness, Accountability, Transparency, and Ethics in Multimodal Learning Analytics / 2402.19071 / DOI:https://doi.org/10.48550/arXiv.2402.19071 / Published by ArXiv / on (web) Publishing site
  - 4. Results
- Safeguarding Marketing Research: The Generation, Identification, and Mitigation of AI-Fabricated Disinformation / 2403.14706 / DOI:https://doi.org/10.48550/arXiv.2403.14706 / Published by ArXiv / on (web) Publishing site
  - Introduction
  - Methodology
- AI Alignment: A Comprehensive Survey / 2310.19852 / DOI:https://doi.org/10.48550/arXiv.2310.19852 / Published by ArXiv / on (web) Publishing site
  - 1 Introduction
  - 2 Learning from Feedback
- Large Language Model Supply Chain: A Research Agenda / 2404.12736 / DOI:https://doi.org/10.48550/arXiv.2404.12736 / Published by ArXiv / on (web) Publishing site
  - 4 LLM Lifecycle
  - References
- A Fourth Wave of Open Data? Exploring the Spectrum of Scenarios for Open Data and Generative AI / 2405.04333 / DOI:https://doi.org/10.48550/arXiv.2405.04333 / Published by ArXiv / on (web) Publishing site
  - 5. Recommendations for Advancing Open Data in Generative AI
- Social Choice Should Guide AI Alignment in Dealing with Diverse Human Feedback / 2404.10271 / DOI:https://doi.org/10.48550/arXiv.2404.10271 / Published by ArXiv / on (web) Publishing site
  - Impact Statement
- Integrating Emotional and Linguistic Models for Ethical Compliance in Large Language Models / 2405.07076 / DOI:https://doi.org/10.48550/arXiv.2405.07076 / Published by ArXiv / on (web) Publishing site
  - Appendix C: Z. Sayre to F. S. Fitzgerald w/ Mixed Emotions
- Using ChatGPT for Thematic Analysis / 2405.08828 / DOI:https://doi.org/10.48550/arXiv.2405.08828 / Published by ArXiv / on (web) Publishing site
  - 4 Validation Using Topic Modeling
- The Narrow Depth and Breadth of Corporate Responsible AI Research / 2405.12193 / DOI:https://doi.org/10.48550/arXiv.2405.12193 / Published by ArXiv / on (web) Publishing site
  - 4 The Narrow Depth of Industry's Responsible AI Research
- Promoting Fairness and Diversity in Speech Datasets for Mental Health and Neurological Disorders Research / 2406.04116 / DOI:https://doi.org/10.48550/arXiv.2406.04116 / Published by ArXiv / on (web) Publishing site
  - 2. A Case Study on DAIC-WoZ Depression Research
- Deception Analysis with Artificial Intelligence: An Interdisciplinary Perspective / 2406.05724 / DOI:https://doi.org/10.48550/arXiv.2406.05724 / Published by ArXiv / on (web) Publishing site
  - 4 DAMAS: A MAS Framework for Deception Analysis
- Global AI Governance in Healthcare: A Cross-Jurisdictional Regulatory Analysis / 2406.08695 / DOI:https://doi.org/10.48550/arXiv.2406.08695 / Published by ArXiv / on (web) Publishing site
  - 7 Future Directions
- Some things never change: how far generative AI can really change software engineering practice / 2406.09725 / DOI:https://doi.org/10.48550/arXiv.2406.09725 / Published by ArXiv / on (web) Publishing site
  - 4 Results
- Applications of Generative AI in Healthcare: algorithmic, ethical, legal and societal considerations / 2406.10632 / DOI:https://doi.org/10.48550/arXiv.2406.10632 / Published by ArXiv / on (web) Publishing site
  - III. Analysis
  - Appendix B Legal aspects
- Leveraging Large Language Models for Patient Engagement: The Power of Conversational AI in Digital Health / 2406.13659 / DOI:https://doi.org/10.48550/arXiv.2406.13659 / Published by ArXiv / on (web) Publishing site
  - III. CASE STUDIES: APPLICATIONS OF LLMs IN PATIENT ENGAGEMENT
- AI Alignment through Reinforcement Learning from Human Feedback? Contradictions and Limitations / 2406.18346 / DOI:https://doi.org/10.48550/arXiv.2406.18346 / Published by ArXiv / on (web) Publishing site
  - 3 Limitations of RLxF
- Honest Computing: Achieving demonstrable data lineage and provenance for driving data and process-sensitive policies / 2407.14390 / DOI:https://doi.org/10.48550/arXiv.2407.14390 / Published by ArXiv / on (web) Publishing site
  - 1. Introduction
- Deepfake Media Forensics: State of the Art and Challenges Ahead / 2408.00388 / DOI:https://doi.org/10.48550/arXiv.2408.00388 / Published by ArXiv / on (web) Publishing site
  - Abstract
  - 1. Introduction
  - 2. Deepfake Detection
  - 4. Passive Deepfake Authentication Methods
  - References
- Integrating ESG and AI: A Comprehensive Responsible AI Assessment Framework / 2408.00965 / DOI:https://doi.org/10.48550/arXiv.2408.00965 / Published by ArXiv / on (web) Publishing site
  - 4 ESG-AI framework
- Surveys Considered Harmful? Reflecting on the Use of Surveys in AI Research, Development, and Governance / 2408.01458 / DOI:https://doi.org/10.48550/arXiv.2408.01458 / Published by ArXiv / on (web) Publishing site
  - References
- Criticizing Ethics According to Artificial Intelligence / 2408.04609 / DOI:https://doi.org/10.48550/arXiv.2408.04609 / Published by ArXiv / on (web) Publishing site
  - 1 Preliminary notes
- The Responsible Foundation Model Development Cheatsheet: A Review of Tools & Resources / 2406.16746 / DOI:https://doi.org/10.48550/arXiv.2406.16746 / Published by ArXiv / on (web) Publishing site
  - 8 Model Evaluation
- Visualization Atlases: Explaining and Exploring Complex Topics through Data, Visualization, and Narration / 2408.07483 / DOI:https://doi.org/10.48550/arXiv.2408.07483 / Published by ArXiv / on (web) Publishing site
  - 4 Interviews with Visualization Atlas Creators
- Neuro-Symbolic AI for Military Applications / 2408.09224 / DOI:https://doi.org/10.48550/arXiv.2408.09224 / Published by ArXiv / on (web) Publishing site
  - II. Neuro-Symbolic AI
- Conference Submission and Review Policies to Foster Responsible Computing Research / 2408.09678 / DOI:https://doi.org/10.48550/arXiv.2408.09678 / Published by ArXiv / on (web) Publishing site
  - Accurate Reporting and Reproducibility
- Don't Kill the Baby: The Case for AI in Arbitration / 2408.11608 / DOI:https://doi.org/10.48550/arXiv.2408.11608 / Published by ArXiv / on (web) Publishing site
  - 1. Resistance Against AI Does Not Offer Conclusive Reasons for Outright Rejection
- Preliminary Insights on Industry Practices for Addressing Fairness Debt / 2409.02432 / DOI:https://doi.org/10.48550/arXiv.2409.02432 / Published by ArXiv / on (web) Publishing site
  - 4 Findings
  - 6 Conclusions