If you need more than one keyword, edit the search field and separate keywords with an underscore (_).
The list of search keywords can be up to 50 characters long.
If you modify the keywords, press Enter within the field to confirm the new search key.
Tag: types
Bibliography items in which this tag occurs: 471
- The AI Index 2022 Annual Report / 2205.03468 / ISBN:https://doi.org/10.48550/arXiv.2205.03468 / Published by ArXiv / on (web) Publishing site
- Report highlights
Chapter 1 Research and Development
Chapter 2 Technical Performance
Chapter 3 Technical AI Ethics
Chapter 5 AI Policy and Governance
Appendix - Ethics of AI: A Systematic Literature Review of Principles and Challenges / 2109.07906 / ISBN:https://doi.org/10.48550/arXiv.2109.07906 / Published by ArXiv / on (web) Publishing site
- 4 Reporting the review
- AI Ethics Issues in Real World: Evidence from AI Incident Database / 2206.07635 / ISBN:https://doi.org/10.48550/arXiv.2206.07635 / Published by ArXiv / on (web) Publishing site
- 4 Results
- The Different Faces of AI Ethics Across the World: A Principle-Implementation Gap Analysis / 2206.03225 / ISBN:https://doi.org/10.48550/arXiv.2206.03225 / Published by ArXiv / on (web) Publishing site
- 5 Evaluation of Ethical Principle Implementations
- A Framework for Ethical AI at the United Nations / 2104.12547 / ISBN:https://doi.org/10.48550/arXiv.2104.12547 / Published by ArXiv / on (web) Publishing site
- 2. Defining ethical AI
Conclusion - Worldwide AI Ethics: a review of 200 guidelines and recommendations for AI governance / 2206.11922 / ISBN:https://doi.org/10.48550/arXiv.2206.11922 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Related Work
3 Methodology
4 Results
6 Conclusion - Beyond Near- and Long-Term: Towards a Clearer Account of Research Priorities in AI Ethics and Society / 2001.04335 / ISBN:https://doi.org/10.48550/arXiv.2001.04335 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 The Near-Long
4 A Clearer Account of Research Priorities and Disagreements - ESR: Ethics and Society Review of Artificial Intelligence Research / 2106.11521 / ISBN:https://doi.org/10.48550/arXiv.2106.11521 / Published by ArXiv / on (web) Publishing site
- 3 The ESR Process
4 Deployment and Evaluation - What does it mean to be a responsible AI practitioner: An ontology of roles and skills / 2205.03946 / ISBN:https://doi.org/10.48550/arXiv.2205.03946 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
3 Methodology
Appendix A supplementary material - From OECD to India: Exploring cross-cultural differences in perceived trust, responsibility and reliance of AI and human experts / 2307.15452 / ISBN:https://doi.org/10.48550/arXiv.2307.15452 / Published by ArXiv / on (web) Publishing site
- Abstract
2. Method
3. Results - The Ethics of AI Value Chains / 2307.16787 / ISBN:https://doi.org/10.48550/arXiv.2307.16787 / Published by ArXiv / on (web) Publishing site
- Abstract
1. Introduction
2. Theory
3. Methodology
5. Future Directions for Research, Practice, & Policy
6. Conclusion - Perceptions of the Fourth Industrial Revolution and Artificial Intelligence Impact on Society / 2308.02030 / ISBN:https://doi.org/10.48550/arXiv.2308.02030 / Published by ArXiv / on (web) Publishing site
- Literature Review
- Regulating AI manipulation: Applying Insights from behavioral economics and psychology to enhance the practicality of the EU AI Act / 2308.02041 / ISBN:https://doi.org/10.48550/arXiv.2308.02041 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Clarifying Terminologies of Article-5: Insights from Behavioral Economics and Psychology - From Military to Healthcare: Adopting and Expanding Ethical Principles for Generative Artificial Intelligence / 2308.02448 / ISBN:https://doi.org/10.48550/arXiv.2308.02448 / Published by ArXiv / on (web) Publishing site
- Introduction
- Ethical Considerations and Policy Implications for Large Language Models: Guiding Responsible Development and Deployment / 2308.02678 / ISBN:https://doi.org/10.48550/arXiv.2308.02678 / Published by ArXiv / on (web) Publishing site
- Bias and Discrimination of Training Data
- Normative Ethics Principles for Responsible AI Systems: Taxonomy and Future Directions / 2208.12616 / ISBN:https://doi.org/10.48550/arXiv.2208.12616 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
3 Taxonomy of ethical principles
A Methodology - Bad, mad, and cooked: Moral responsibility for civilian harms in human-AI military teams / 2211.06326 / ISBN:https://doi.org/10.48550/arXiv.2211.06326 / Published by ArXiv / on (web) Publishing site
- Introduction
- A Survey of Safety and Trustworthiness of Large Language Models through the Lens of Verification and Validation / 2305.11391 / ISBN:https://doi.org/10.48550/arXiv.2305.11391 / Published by ArXiv / on (web) Publishing site
- 2 Large Language Models
3 Vulnerabilities, Attack, and Limitations
5 Falsification and Evaluation
7 Runtime Monitor - Artificial Intelligence across Europe: A Study on Awareness, Attitude and Trust / 2308.09979 / ISBN:https://doi.org/10.48550/arXiv.2308.09979 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
- Targeted Data Augmentation for bias mitigation / 2308.11386 / ISBN:https://doi.org/10.48550/arXiv.2308.11386 / Published by ArXiv / on (web) Publishing site
- 3 Targeted data augmentation
4 Experiments - Collect, Measure, Repeat: Reliability Factors for Responsible AI Data Collection / 2308.12885 / ISBN:https://doi.org/10.48550/arXiv.2308.12885 / Published by ArXiv / on (web) Publishing site
- 2 Related Work on Data Excellence
3 Reliability and Reproducibility Metrics for Responsible Data Collection
4 Published Annotation Tasks and Datasets - Building Trust in Conversational AI: A Comprehensive Review and Solution Architecture for Explainable, Privacy-Aware Systems using LLMs and Knowledge Graph / 2308.13534 / ISBN:https://doi.org/10.48550/arXiv.2308.13534 / Published by ArXiv / on (web) Publishing site
- III. Comprehensive review of state-of-the-art LLMs
IV. Applied and technology implications for LLMs - Artificial Intelligence in Career Counseling: A Test Case with ResumAI / 2308.14301 / ISBN:https://doi.org/10.48550/arXiv.2308.14301 / Published by ArXiv / on (web) Publishing site
- 4 Results and discussion
- Rethinking Machine Ethics -- Can LLMs Perform Moral Reasoning through the Lens of Moral Theories? / 2308.15399 / ISBN:https://doi.org/10.48550/arXiv.2308.15399 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
4 Experiment - The AI Revolution: Opportunities and Challenges for the Finance Sector / 2308.16538 / ISBN:https://doi.org/10.48550/arXiv.2308.16538 / Published by ArXiv / on (web) Publishing site
- 2 Key AI technology in financial services
3 Benefits of AI use in the finance sector
6 Regulation of AI and regulating through AI - Ethical Framework for Harnessing the Power of AI in Healthcare and Beyond / 2309.00064 / ISBN:https://doi.org/10.48550/arXiv.2309.00064 / Published by ArXiv / on (web) Publishing site
- 3 Bias and fairness
6 Way forward - The Impact of Artificial Intelligence on the Evolution of Digital Education: A Comparative Study of OpenAI Text Generation Tools including ChatGPT, Bing Chat, Bard, and Ernie / 2309.02029 / ISBN:https://doi.org/10.48550/arXiv.2309.02029 / Published by ArXiv / on (web) Publishing site
- 3. ChatGPT Training Process
- Pathway to Future Symbiotic Creativity / 2209.02388 / ISBN:https://doi.org/10.48550/arXiv.2209.02388 / Published by ArXiv / on (web) Publishing site
- Part 1 - 1 Generative Systems: Mimicking Artifacts
Part 2 Art Data and Human–Machine Interaction in Art Creation
Part 2 - 1 Biometric Signal Sensing Technologies and Emotion Data
Part 2 - 2 Motion Capture Technologies and Motion Data
Part 2 - 3 Photogrammetry / Volumetric Capture
Part 3 - 2 Machine Artist Models
Part 3 - 4 Demonstration of the Proposed Framework
Part 5 - 2 Algorithmic Bias in Art Generation
Part 5 - 3 Democratization of Art with new Technologies - FUTURE-AI: Guiding Principles and Consensus Recommendations for Trustworthy Artificial Intelligence in Medical Imaging / 2109.09658 / ISBN:https://doi.org/10.48550/arXiv.2109.09658 / Published by ArXiv / on (web) Publishing site
- 4. Traceability - For Transparent and Dynamic AI in Medical Imaging
6. Robustness - For Reliable AI in Medical Imaging
7. Explainability - For Enhanced Understanding of AI in Medical Imaging - The Cambridge Law Corpus: A Corpus for Legal AI Research / 2309.12269 / ISBN:https://doi.org/10.48550/arXiv.2309.12269 / Published by ArXiv / on (web) Publishing site
- 4 Experiments
Cambridge Law Corpus: Datasheet - Security Considerations in AI-Robotics: A Survey of Current Methods, Challenges, and Opportunities / 2310.08565 / ISBN:https://doi.org/10.48550/arXiv.2310.08565 / Published by ArXiv / on (web) Publishing site
- IV. Attack Surfaces
- Deepfakes, Phrenology, Surveillance, and More! A Taxonomy of AI Privacy Risks / 2310.07879 / ISBN:https://doi.org/10.48550/arXiv.2310.07879 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Background and Related Work
3 Method
4 Taxonomy of AI Privacy Risks
5 Discussion
6 Conclusion - The AI Incident Database as an Educational Tool to Raise Awareness of AI Harms: A Classroom Exploration of Efficacy, Limitations, & Future Improvements / 2310.06269 / ISBN:https://doi.org/10.48550/arXiv.2310.06269 / Published by ArXiv / on (web) Publishing site
- 3 Analysis and Findings
B Pre-class Questionnaire (Verbatim) - A Review of the Ethics of Artificial Intelligence and its Applications in the United States / 2310.05751 / ISBN:https://doi.org/10.48550/arXiv.2310.05751 / Published by ArXiv / on (web) Publishing site
- 3. AI Ethical Principles
4. Implementing the Practical Use of Ethical AI Applications - A Survey of Large Language Models for Healthcare: from Data, Technology, and Applications to Accountability and Ethics / 2310.05694 / ISBN:https://doi.org/10.48550/arXiv.2310.05694 / Published by ArXiv / on (web) Publishing site
- 1. Introduction
2. What LLMs can do for healthcare? from fundamental tasks to advanced applications
3. From PLMs to LLMs for healthcare
4. Usage and data for healthcare LLM - STREAM: Social data and knowledge collective intelligence platform for TRaining Ethical AI Models / 2310.05563 / ISBN:https://doi.org/10.48550/arXiv.2310.05563 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
3 The applications of STREAM - Regulation and NLP (RegNLP): Taming Large Language Models / 2310.05553 / ISBN:https://doi.org/10.48550/arXiv.2310.05553 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Regulation: A Short Introduction
5 Regulation and NLP (RegNLP): A New Field - Ethics of Artificial Intelligence and Robotics in the Architecture, Engineering, and Construction Industry / 2310.05414 / ISBN:https://doi.org/10.48550/arXiv.2310.05414 / Published by ArXiv / on (web) Publishing site
- 3. Ethics of AI and Robotics
4. Systematic Review and Scientometric Analysis
5. Ethical Issues of AI and Robotics in AEC Industry
7. Future Research Direction - Compromise in Multilateral Negotiations and the Global Regulation of Artificial Intelligence / 2309.17158 / ISBN:https://doi.org/10.48550/arXiv.2309.17158 / Published by ArXiv / on (web) Publishing site
- 2. The practice of multilateral negotiation and the mechanisms of compromises
5. Text negotiations as normative testing - Risk of AI in Healthcare: A Comprehensive Literature Review and Study Framework / 2309.14530 / ISBN:https://doi.org/10.48550/arXiv.2309.14530 / Published by ArXiv / on (web) Publishing site
- 4. Technical Risks
- Autonomous Vehicles an overview on system, cyber security, risks, issues, and a way forward / 2309.14213 / ISBN:https://doi.org/10.48550/arXiv.2309.14213 / Published by ArXiv / on (web) Publishing site
- 2. Autonomous vehicles
5. Cybersecurity Risks
6. Risk management
7. Issues - The Return on Investment in AI Ethics: A Holistic Framework / 2309.13057 / ISBN:https://doi.org/10.48550/arXiv.2309.13057 / Published by ArXiv / on (web) Publishing site
- 1. Introduction
2. AI Ethics
4. A Holistic Framework
5. Discussion - Who to Trust, How and Why: Untangling AI Ethics Principles, Trustworthiness and Trust / 2309.10318 / ISBN:https://doi.org/10.48550/arXiv.2309.10318 / Published by ArXiv / on (web) Publishing site
- Different Types of Trust
Trust and AI Ethics Principles - AI & Blockchain as sustainable teaching and learning tools to cope with the 4IR / 2305.01088 / ISBN:https://doi.org/10.48550/arXiv.2305.01088 / Published by ArXiv / on (web) Publishing site
- 1. Introduction
9. Challenges of AI and Blockchain in Teaching and Learning - Toward an Ethics of AI Belief / 2304.14577 / ISBN:https://doi.org/10.48550/arXiv.2304.14577 / Published by ArXiv / on (web) Publishing site
- 4. Nascent Extant Work that Falls Within the Ethics of AI Belief
- A Conceptual Algorithm for Applying Ethical Principles of AI to Medical Practice / 2304.11530 / ISBN:https://doi.org/10.48550/arXiv.2304.11530 / Published by ArXiv / on (web) Publishing site
- 3 Ethical datasets and algorithm development guidelines
- Responsible AI Pattern Catalogue: A Collection of Best Practices for AI Governance and Engineering / 2209.04963 / ISBN:https://doi.org/10.48550/arXiv.2209.04963 / Published by ArXiv / on (web) Publishing site
- 2 Methodology
3 Governance Patterns
4 Process Patterns - FUTURE-AI: International consensus guideline for trustworthy and deployable artificial intelligence in healthcare / 2309.12325 / ISBN:https://doi.org/10.48550/arXiv.2309.12325 / Published by ArXiv / on (web) Publishing site
- FUTURE-AI GUIDELINE
DISCUSSION - Language Agents for Detecting Implicit Stereotypes in Text-to-Image Models at Scale / 2310.11778 / ISBN:https://doi.org/10.48550/arXiv.2310.11778 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Agent Design
3 Agent Benchmark
4 Agent Performance
5 Related Work
6 Conclusion and Future Work
Appendix B Experiment Details - Specific versus General Principles for Constitutional AI / 2310.13798 / ISBN:https://doi.org/10.48550/arXiv.2310.13798 / Published by ArXiv / on (web) Publishing site
- Abstract
3 Generalization from a Simple Good for Humanity Principle
I Responses on Prompts from PALMS, LaMDA, and InstructGPT - The Self 2.0: How AI-Enhanced Self-Clones Transform Self-Perception and Improve Presentation Skills / 2310.15112 / ISBN:https://doi.org/10.48550/arXiv.2310.15112 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
4 Findings - Systematic AI Approach for AGI: Addressing Alignment, Energy, and AGI Grand Challenges / 2310.15274 / ISBN:https://doi.org/10.48550/arXiv.2310.15274 / Published by ArXiv / on (web) Publishing site
- 5 System Design for AI Alignment
- AI Alignment and Social Choice: Fundamental Limitations and Policy Implications / 2310.16048 / ISBN:https://doi.org/10.48550/arXiv.2310.16048 / Published by ArXiv / on (web) Publishing site
- 3 Arrow-Sen Impossibility Theorems for RLHF
- A Comprehensive Review of AI-enabled Unmanned Aerial Vehicle: Trends, Vision, and Challenges / 2310.16360 / ISBN:https://doi.org/10.48550/arXiv.2310.16360 / Published by ArXiv / on (web) Publishing site
- I. Introduction
III. UAV Platform Type
IV. Artificial Intelligence Embedded UAV - Unpacking the Ethical Value Alignment in Big Models / 2310.17551 / ISBN:https://doi.org/10.48550/arXiv.2310.17551 / Published by ArXiv / on (web) Publishing site
- 2 Risks and Ethical Issues of Big Model
3 Investigating the Ethical Values of Large Language Models
4 Equilibrium Alignment: A Prospective Paradigm for Ethical Value Alignment - Moral Responsibility for AI Systems / 2310.18040 / ISBN:https://doi.org/10.48550/arXiv.2310.18040 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
- AI for Open Science: A Multi-Agent Perspective for Ethically Translating Data to Knowledge / 2310.18852 / ISBN:https://doi.org/10.48550/arXiv.2310.18852 / Published by ArXiv / on (web) Publishing site
- 2 Background and Related Work
3 A Formal Language of AI for Open Science - Artificial Intelligence Ethics Education in Cybersecurity: Challenges and Opportunities: a focus group report / 2311.00903 / ISBN:https://doi.org/10.48550/arXiv.2311.00903 / Published by ArXiv / on (web) Publishing site
- Communication skills in cybersecurity and ethics
- LLMs grasp morality in concept / 2311.02294 / ISBN:https://doi.org/10.48550/arXiv.2311.02294 / Published by ArXiv / on (web) Publishing site
- 2 A General Theory of Meaning
- Educating for AI Cybersecurity Work and Research: Ethics, Systems Thinking, and Communication Requirements / 2311.04326 / ISBN:https://doi.org/10.48550/arXiv.2311.04326 / Published by ArXiv / on (web) Publishing site
- Literature Review
Research questions - Kantian Deontology Meets AI Alignment: Towards Morally Grounded Fairness Metrics / 2311.05227 / ISBN:https://doi.org/10.48550/arXiv.2311.05227 / Published by ArXiv / on (web) Publishing site
- 4 Deontological AI Alignment
- Unlocking the Potential of ChatGPT: A Comprehensive Exploration of its Applications, Advantages, Limitations, and Future Directions in Natural Language Processing / 2304.02017 / ISBN:https://doi.org/10.48550/arXiv.2304.02017 / Published by ArXiv / on (web) Publishing site
- 4 Applications of ChatGPT in real-world scenarios
6 Limitations and potential challenges
10 Future directions for ChatGPT in vision domain - Fairness And Bias in Artificial Intelligence: A Brief Survey of Sources, Impacts, And Mitigation Strategies / 2304.07683 / ISBN:https://doi.org/10.48550/arXiv.2304.07683 / Published by ArXiv / on (web) Publishing site
- Abstract
II. Sources of bias in AI
III. Impacts of bias in AI
IV. Mitigation strategies for bias in AI
V. Fairness in AI
VI. Mitigation strategies for fairness in AI - A Brief History of Prompt: Leveraging Language Models. (Through Advanced Prompting) / 2310.04438 / ISBN:https://doi.org/10.48550/arXiv.2310.04438 / Published by ArXiv / on (web) Publishing site
- X. 2020-2021: the rise of LLMs
- Synergizing Human-AI Agency: A Guide of 23 Heuristics for Service Co-Creation with LLM-Based Agents / 2310.15065 / ISBN:https://doi.org/10.48550/arXiv.2310.15065 / Published by ArXiv / on (web) Publishing site
- 2 Related work
3 Method
4 Findings
5 Discussion - She had Cobalt Blue Eyes: Prompt Testing to Create Aligned and Sustainable Language Models / 2310.18333 / ISBN:https://doi.org/10.48550/arXiv.2310.18333 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
- Safety, Trust, and Ethics Considerations for Human-AI Teaming in Aerospace Control / 2311.08943 / ISBN:https://doi.org/10.48550/arXiv.2311.08943 / Published by ArXiv / on (web) Publishing site
- V. Ethics
VI. Conclusion - How Trustworthy are Open-Source LLMs? An Assessment under Malicious Demonstrations Shows their Vulnerabilities / 2311.09447 / ISBN:https://doi.org/10.48550/arXiv.2311.09447 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
3 Methodology - Revolutionizing Customer Interactions: Insights and Challenges in Deploying ChatGPT and Generative Chatbots for FAQs / 2311.09976 / ISBN:https://doi.org/10.48550/arXiv.2311.09976 / Published by ArXiv / on (web) Publishing site
- 1. Introduction
3. Chatbot approaches overview: Taxonomy of existing methods - Practical Cybersecurity Ethics: Mapping CyBOK to Ethical Concerns / 2311.10165 / ISBN:https://doi.org/10.48550/arXiv.2311.10165 / Published by ArXiv / on (web) Publishing site
- 3 Methodology
5 Discussion - First, Do No Harm: Algorithms, AI, and Digital Product Liability Managing Algorithmic Harms Through Liability Law and Market Incentives / 2311.10861 / ISBN:https://doi.org/10.48550/arXiv.2311.10861 / Published by ArXiv / on (web) Publishing site
- Appendix B – Common AI Harms as Described by EPIC10
- Case Repositories: Towards Case-Based Reasoning for AI Alignment / 2311.10934 / ISBN:https://doi.org/10.48550/arXiv.2311.10934 / Published by ArXiv / on (web) Publishing site
- 2 Proposed Process
- Responsible AI Considerations in Text Summarization Research: A Review of Current Practices / 2311.11103 / ISBN:https://doi.org/10.48550/arXiv.2311.11103 / Published by ArXiv / on (web) Publishing site
- 3 Methods
4 Findings - Assessing AI Impact Assessments: A Classroom Study / 2311.11193 / ISBN:https://doi.org/10.48550/arXiv.2311.11193 / Published by ArXiv / on (web) Publishing site
- 2 Background
4 Findings
A Overview of AIIA Instruments
B Study Materials - GPT in Data Science: A Practical Exploration of Model Selection / 2311.11516 / ISBN:https://doi.org/10.48550/arXiv.2311.11516 / Published by ArXiv / on (web) Publishing site
- I. Introduction
III. Approach: capturing and representing heuristics behind GPT's decision-making process - Responsible AI Research Needs Impact Statements Too / 2311.11776 / ISBN:https://doi.org/10.48550/arXiv.2311.11776 / Published by ArXiv / on (web) Publishing site
- Abstract
Suggestions for More Meaningful Engagement with the Impact of RAI Research
Concluding Reflections - Large Language Models in Education: Vision and Opportunities / 2311.13160 / ISBN:https://doi.org/10.48550/arXiv.2311.13160 / Published by ArXiv / on (web) Publishing site
- IV. LLM-empowered education
- The Rise of Creative Machines: Exploring the Impact of Generative AI / 2311.13262 / ISBN:https://doi.org/10.48550/arXiv.2311.13262 / Published by ArXiv / on (web) Publishing site
- IV. Risks of generative AI
- Towards Auditing Large Language Models: Improving Text-based Stereotype Detection / 2311.14126 / ISBN:https://doi.org/10.48550/arXiv.2311.14126 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Related Works
3 Methodology
4 Results and Discussion - Ethical Implications of ChatGPT in Higher Education: A Scoping Review / 2311.14378 / ISBN:https://doi.org/10.48550/arXiv.2311.14378 / Published by ArXiv / on (web) Publishing site
- Research Method
Results - Potential Societal Biases of ChatGPT in Higher Education: A Scoping Review / 2311.14381 / ISBN:https://doi.org/10.48550/arXiv.2311.14381 / Published by ArXiv / on (web) Publishing site
- Abstract
Introduction
Overview of Societal Biases in GAI Models
Methodology
Findings
Discussion
Conclusion - RAISE -- Radiology AI Safety, an End-to-end lifecycle approach / 2311.14570 / ISBN:https://doi.org/10.48550/arXiv.2311.14570 / Published by ArXiv / on (web) Publishing site
- 3. Production deployment monitoring phase
- Ethics and Responsible AI Deployment / 2311.14705 / ISBN:https://doi.org/10.48550/arXiv.2311.14705 / Published by ArXiv / on (web) Publishing site
- 4. Addressing bias, transparency, and accountability
- From deepfake to deep useful: risks and opportunities through a systematic literature review / 2311.15809 / ISBN:https://doi.org/10.48550/arXiv.2311.15809 / Published by ArXiv / on (web) Publishing site
- 2. Material and methods
4. Discussion - Survey on AI Ethics: A Socio-technical Perspective / 2311.17228 / ISBN:https://doi.org/10.48550/arXiv.2311.17228 / Published by ArXiv / on (web) Publishing site
- 3 Transparency and explainability
4 Fairness and equity
5 Responsibility, accountability, and regulations - Privacy and Copyright Protection in Generative AI: A Lifecycle Perspective / 2311.18252 / ISBN:https://doi.org/10.48550/arXiv.2311.18252 / Published by ArXiv / on (web) Publishing site
- 3 Mapping Challenges throughout the Data Lifecycle
- Understanding Teacher Perspectives and Experiences after Deployment of AI Literacy Curriculum in Middle-school Classrooms / 2312.04839 / ISBN:https://doi.org/10.48550/arXiv.2312.04839 / Published by ArXiv / on (web) Publishing site
- 3 Results
- Generative AI in Higher Education: Seeing ChatGPT Through Universities' Policies, Resources, and Guidelines / 2312.05235 / ISBN:https://doi.org/10.48550/arXiv.2312.05235 / Published by ArXiv / on (web) Publishing site
- Abstract
4. Method
5. Results
6. Discussion - Intelligence Primer / 2008.07324 / ISBN:https://doi.org/10.48550/arXiv.2008.07324 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Human intelligence - RE-centric Recommendations for the Development of Trustworthy(er) Autonomous Systems / 2306.01774 / ISBN:https://doi.org/10.48550/arXiv.2306.01774 / Published by ArXiv / on (web) Publishing site
- 2 Related work
- Control Risk for Potential Misuse of Artificial Intelligence in Science / 2312.06632 / ISBN:https://doi.org/10.48550/arXiv.2312.06632 / Published by ArXiv / on (web) Publishing site
- 2 Risks of Misuse for Artificial Intelligence in Science
3 Control the Risks of AI Models in Science
5 Discussion
Appendix A Assessing the Risks of AI Misuse in Scientific Research
Appendix B Details of Risks Demonstration in Chemical Science
Appendix C Detailed Implementation of SciGuard - Culturally Responsive Artificial Intelligence -- Problems, Challenges and Solutions / 2312.08467 / ISBN:https://doi.org/10.48550/arXiv.2312.08467 / Published by ArXiv / on (web) Publishing site
- The concept of multiculturalism and its importance
Culturally responsive AI – current landscape - Investigating Responsible AI for Scientific Research: An Empirical Study / 2312.09561 / ISBN:https://doi.org/10.48550/arXiv.2312.09561 / Published by ArXiv / on (web) Publishing site
- IV. Results
Appendix A – Survey Questionnaire
Appendix B – Interview Questionnaire - Designing Guiding Principles for NLP for Healthcare: A Case Study of Maternal Health / 2312.11803 / ISBN:https://doi.org/10.48550/arXiv.2312.11803 / Published by ArXiv / on (web) Publishing site
- C Full survey questions
- Beyond Fairness: Alternative Moral Dimensions for Assessing Algorithms and Designing Systems / 2312.12559 / ISBN:https://doi.org/10.48550/arXiv.2312.12559 / Published by ArXiv / on (web) Publishing site
- 2 The Reign of Algorithmic Fairness
- Learning Human-like Representations to Enable Learning Human Values / 2312.14106 / ISBN:https://doi.org/10.48550/arXiv.2312.14106 / Published by ArXiv / on (web) Publishing site
- 3 Problem Formulation
- Improving Task Instructions for Data Annotators: How Clear Rules and Higher Pay Increase Performance in Data Annotation in the AI Economy / 2312.14565 / ISBN:https://doi.org/10.48550/arXiv.2312.14565 / Published by ArXiv / on (web) Publishing site
- II. Theoretical background and hypotheses
- Culturally-Attuned Moral Machines: Implicit Learning of Human Value Systems by AI through Inverse Reinforcement Learning / 2312.17479 / ISBN:https://doi.org/10.48550/arXiv.2312.17479 / Published by ArXiv / on (web) Publishing site
- Methods
- Exploring the Frontiers of LLMs in Psychological Applications: A Comprehensive Review / 2401.01519 / ISBN:https://doi.org/10.48550/arXiv.2401.01519 / Published by ArXiv / on (web) Publishing site
- 5. LLMs in social and cultural psychology
7. Challenges and future directions - Synthetic Data in AI: Challenges, Applications, and Ethical Implications / 2401.01629 / ISBN:https://doi.org/10.48550/arXiv.2401.01629 / Published by ArXiv / on (web) Publishing site
- 3. The usage of synthetic data
- MULTI-CASE: A Transformer-based Ethics-aware Multimodal Investigative Intelligence Framework / 2401.01955 / ISBN:https://doi.org/10.48550/arXiv.2401.01955 / Published by ArXiv / on (web) Publishing site
- Abstract
I. Introduction
III. Methodology: model development
IV. System design
V. Evaluation
VI. Discussion and future work
VII. Conclusion - AI Ethics Principles in Practice: Perspectives of Designers and Developers / 2112.07467 / ISBN:https://doi.org/10.48550/arXiv.2112.07467 / Published by ArXiv / on (web) Publishing site
- IV. Results
- Unmasking Bias in AI: A Systematic Review of Bias Detection and Mitigation Strategies in Electronic Health Record-based Models / 2310.19917 / ISBN:https://doi.org/10.48550/arXiv.2310.19917 / Published by ArXiv / on (web) Publishing site
- Abstract
Background and significance
Objective
Materials and methods
Results
Discussion - Resolving Ethics Trade-offs in Implementing Responsible AI / 2401.08103 / ISBN:https://doi.org/10.48550/arXiv.2401.08103 / Published by ArXiv / on (web) Publishing site
- Abstract
II. Approaches for Resolving Trade-offs
III. Discussion and Recommendations
IV. Concluding Remarks - Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making / 2401.08691 / ISBN:https://doi.org/10.48550/arXiv.2401.08691 / Published by ArXiv / on (web) Publishing site
- Abstract
Contents / List of figures / List of tables / Acronyms
1 Introduction
I Understanding bias - 2 Bias and moral framework in AI-based decision making
3 Bias on demand: a framework for generating synthetic data with bias
II Mitigating bias - 5 Fairness mitigation
6 FFTree: a flexible tree to mitigate multiple fairness criteria
8 Fairview: an evaluative AI support for addressing fairness
IV Conclusions - Business and ethical concerns in domestic Conversational Generative AI-empowered multi-robot systems / 2401.09473 / ISBN:https://doi.org/10.48550/arXiv.2401.09473 / Published by ArXiv / on (web) Publishing site
- Abstract
2 Background
4 Results
6 Conclusion - FAIR Enough How Can We Develop and Assess a FAIR-Compliant Dataset for Large Language Models' Training? / 2401.11033 / ISBN:https://doi.org/10.48550/arXiv.2401.11033 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
3 Data Management Challenges in Large Language Models
4 Framework for FAIR Data Principles Integration in LLM Development - Enabling Global Image Data Sharing in the Life Sciences / 2401.13023 / ISBN:https://doi.org/10.48550/arXiv.2401.13023 / Published by ArXiv / on (web) Publishing site
- 3. Use cases representing different image data types and their challenges and status for sharing
- Beyond principlism: Practical strategies for ethical AI use in research practices / 2401.15284 / ISBN:https://doi.org/10.48550/arXiv.2401.15284 / Published by ArXiv / on (web) Publishing site
- 1 The “Triple-Too” problem of AI ethics
2 A shift to user-centered realism in scientific contexts
3 Five specific goals and action-guiding strategies for ethical AI use in research practices - A Scoping Study of Evaluation Practices for Responsible AI Tools: Steps Towards Effectiveness Evaluations / 2401.17486 / ISBN:https://doi.org/10.48550/arXiv.2401.17486 / Published by ArXiv / on (web) Publishing site
- 2 Related work
3 Methods
4 RAI tool evaluation practices - Detecting Multimedia Generated by Large AI Models: A Survey / 2402.00045 / ISBN:https://doi.org/10.48550/arXiv.2402.00045 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Generation
3 Detection
4 Tools and Evaluation Metrics
5 Discussion - Responsible developments and networking research: a reflection beyond a paper ethical statement / 2402.00442 / ISBN:https://doi.org/10.48550/arXiv.2402.00442 / Published by ArXiv / on (web) Publishing site
- 3 Beyond technical dimensions
- Generative Artificial Intelligence in Higher Education: Evidence from an Analysis of Institutional Policies and Guidelines / 2402.01659 / ISBN:https://doi.org/10.48550/arXiv.2402.01659 / Published by ArXiv / on (web) Publishing site
- 4. Findings
- Commercial AI, Conflict, and Moral Responsibility: A theoretical analysis and practical approach to the moral responsibilities associated with dual-use AI technology / 2402.01762 / ISBN:https://doi.org/10.48550/arXiv.2402.01762 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
- (A)I Am Not a Lawyer, But...: Engaging Legal Experts towards Responsible LLM Policies for Legal Advice / 2402.01864 / ISBN:https://doi.org/10.48550/arXiv.2402.01864 / Published by ArXiv / on (web) Publishing site
- 2 Related work and our approach
3 Methods: case-based expert deliberation
4 Results
C Linear regression of participants' AI usage and desired responses - POLARIS: A framework to guide the development of Trustworthy AI systems / 2402.05340 / ISBN:https://doi.org/10.48550/arXiv.2402.05340 / Published by ArXiv / on (web) Publishing site
- Abstract
5 POLARIS framework application - Face Recognition: to Deploy or not to Deploy? A Framework for Assessing the Proportional Use of Face Recognition Systems in Real-World Scenarios / 2402.05731 / ISBN:https://doi.org/10.48550/arXiv.2402.05731 / Published by ArXiv / on (web) Publishing site
- 2. Background
5. The framework in practice - Ethics in AI through the Practitioner's View: A Grounded Theory Literature Review / 2206.09514 / ISBN:https://doi.org/10.48550/arXiv.2206.09514 / Published by ArXiv / on (web) Publishing site
- 5 Findings
- Generative Artificial Intelligence in Healthcare: Ethical Considerations and Assessment Checklist / 2311.02107 / ISBN:https://doi.org/10.48550/arXiv.2311.02107 / Published by ArXiv / on (web) Publishing site
- Abstract
Introduction
Discussion - I Think, Therefore I am: Benchmarking Awareness of Large Language Models Using AwareBench / 2401.17882 / ISBN:https://doi.org/10.48550/arXiv.2401.17882 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Related Work
3 Awareness in LLMs
4 Awareness Dataset: AWAREEVAL
5 Experiments
Limitation
A AWAREEVAL Dataset Details - Mapping the Ethics of Generative AI: A Comprehensive Scoping Review / 2402.08323 / ISBN:https://doi.org/10.48550/arXiv.2402.08323 / Published by ArXiv / on (web) Publishing site
- 3 Results
- Taking Training Seriously: Human Guidance and Management-Based Regulation of Artificial Intelligence / 2402.08466 / ISBN:https://doi.org/10.48550/arXiv.2402.08466 / Published by ArXiv / on (web) Publishing site
- 2 Emerging Management-based AI Regulation
4 Techniques of Human-Guided Training - User Modeling and User Profiling: A Comprehensive Survey / 2402.09660 / ISBN:https://doi.org/10.48550/arXiv.2402.09660 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
4 Current Taxonomy - Inadequacies of Large Language Model Benchmarks in the Era of Generative Artificial Intelligence / 2402.09880 / ISBN:https://doi.org/10.48550/arXiv.2402.09880 / Published by ArXiv / on (web) Publishing site
- V. Processual Elements
- Copyleft for Alleviating AIGC Copyright Dilemma: What-if Analysis, Public Perception and Implications / 2402.12216 / ISBN:https://doi.org/10.48550/arXiv.2402.12216 / Published by ArXiv / on (web) Publishing site
- 3 The AIGC Copyright Dilemma: A What-if Analysis
- Evolving AI Collectives to Enhance Human Diversity and Enable Self-Regulation / 2402.12590 / ISBN:https://doi.org/10.48550/arXiv.2402.12590 / Published by ArXiv / on (web) Publishing site
- 4. Robustness of Free-Formed AI Collectives Against Risks
5. Open Challenges for Free-Formed AI Collectives - What if LLMs Have Different World Views: Simulating Alien Civilizations with LLM-based Agents / 2402.13184 / ISBN:https://doi.org/10.48550/arXiv.2402.13184 / Published by ArXiv / on (web) Publishing site
- 3 Model of Civilization Evolution
5 Experiment Design - The METRIC-framework for assessing data quality for trustworthy AI in medicine: a systematic review / 2402.13635 / ISBN:https://doi.org/10.48550/arXiv.2402.13635 / Published by ArXiv / on (web) Publishing site
- Introduction
Results
METRIC-framework for medical training data - The European Commitment to Human-Centered Technology: The Integral Role of HCI in the EU AI Act's Success / 2402.14728 / ISBN:https://doi.org/10.48550/arXiv.2402.14728 / Published by ArXiv / on (web) Publishing site
- 3 There is no reliable AI regulation without a sound theory of human-AI interaction
- Autonomous Vehicles: Evolution of Artificial Intelligence and Learning Algorithms / 2402.17690 / ISBN:https://doi.org/10.48550/arXiv.2402.17690 / Published by ArXiv / on (web) Publishing site
- Abstract
I. Introduction
VI. AI and Learning Algorithms Statistics for Autonomous Vehicles - FATE in MMLA: A Student-Centred Exploration of Fairness, Accountability, Transparency, and Ethics in Multimodal Learning Analytics / 2402.19071 / ISBN:https://doi.org/10.48550/arXiv.2402.19071 / Published by ArXiv / on (web) Publishing site
- 2. Background
3. Methods
4. Results - Guidelines for Integrating Value Sensitive Design in Responsible AI Toolkits / 2403.00145 / ISBN:https://doi.org/10.48550/arXiv.2403.00145 / Published by ArXiv / on (web) Publishing site
- 2 Background and Related Work
3 Methodology
5 Discussion - The Minimum Information about CLinical Artificial Intelligence Checklist for Generative Modeling Research (MI-CLAIM-GEN) / 2403.02558 / ISBN:https://doi.org/10.48550/arXiv.2403.02558 / Published by ArXiv / on (web) Publishing site
- Part 1. Study design
Part 5. Interpretability of generative models - Towards an AI-Enhanced Cyber Threat Intelligence Processing Pipeline / 2403.03265 / ISBN:https://doi.org/10.48550/arXiv.2403.03265 / Published by ArXiv / on (web) Publishing site
- IV. Challenges and Considerations
- A Survey on Human-AI Teaming with Large Pre-Trained Models / 2403.04931 / ISBN:https://doi.org/10.48550/arXiv.2403.04931 / Published by ArXiv / on (web) Publishing site
- 4 Safe, Secure and Trustworthy AI
- How Trustworthy are Open-Source LLMs? An Assessment under Malicious Demonstrations Shows their Vulnerabilities / 2311.09447 / ISBN:https://doi.org/10.48550/arXiv.2311.09447 / Published by ArXiv / on (web) Publishing site
- B Baseline Setup
- AGI Artificial General Intelligence for Education / 2304.12479 / ISBN:https://doi.org/10.48550/arXiv.2304.12479 / Published by ArXiv / on (web) Publishing site
- 4. Ethical Issues and Concerns
- Moral Sparks in Social Media Narratives / 2310.19268 / ISBN:https://doi.org/10.48550/arXiv.2310.19268 / Published by ArXiv / on (web) Publishing site
- 4. Methods
- Responsible Artificial Intelligence: A Structured Literature Review / 2403.06910 / ISBN:https://doi.org/10.48550/arXiv.2403.06910 / Published by ArXiv / on (web) Publishing site
- 3. Analysis
- Legally Binding but Unfair? Towards Assessing Fairness of Privacy Policies / 2403.08115 / ISBN:https://doi.org/10.48550/arXiv.2403.08115 / Published by ArXiv / on (web) Publishing site
- 2 Related Work
4 Informational Fairness - Towards a Privacy and Security-Aware Framework for Ethical AI: Guiding the Development and Assessment of AI Systems / 2403.08624 / ISBN:https://doi.org/10.48550/arXiv.2403.08624 / Published by ArXiv / on (web) Publishing site
- 3 Research Methodology
4 Results of the Systematic Literature Review - Review of Generative AI Methods in Cybersecurity / 2403.08701 / ISBN:https://doi.org/10.48550/arXiv.2403.08701 / Published by ArXiv / on (web) Publishing site
- Appendix A GPT3.5 and GPT4 OCO-scripting
- Evaluation Ethics of LLMs in Legal Domain / 2403.11152 / ISBN:https://doi.org/10.48550/arXiv.2403.11152 / Published by ArXiv / on (web) Publishing site
- 3 Method
- Trust in AI: Progress, Challenges, and Future Directions / 2403.14680 / ISBN:https://doi.org/10.48550/arXiv.2403.14680 / Published by ArXiv / on (web) Publishing site
- Abstract
1. Introduction
3. Findings - Safeguarding Marketing Research: The Generation, Identification, and Mitigation of AI-Fabricated Disinformation / 2403.14706 / ISBN:https://doi.org/10.48550/arXiv.2403.14706 / Published by ArXiv / on (web) Publishing site
- Methodology
Results
Conclusion - The Journey to Trustworthy AI- Part 1 Pursuit of Pragmatic Frameworks / 2403.15457 / ISBN:https://doi.org/10.48550/arXiv.2403.15457 / Published by ArXiv / on (web) Publishing site
- A Appendix
- Analyzing Potential Solutions Involving Regulation to Escape Some of AI's Ethical Concerns / 2403.15507 / ISBN:https://doi.org/10.48550/arXiv.2403.15507 / Published by ArXiv / on (web) Publishing site
- Introduction
- The Pursuit of Fairness in Artificial Intelligence Models A Survey / 2403.17333 / ISBN:https://doi.org/10.48550/arXiv.2403.17333 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Related Survey
3 Conceptualizing Fairness and Bias in ML
5 Ways to mitigate bias and promote Fairness
8 Conclusion - AI Act and Large Language Models (LLMs): When critical issues and privacy impact require human and ethical oversight / 2404.00600 / ISBN:https://doi.org/10.48550/arXiv.2404.00600 / Published by ArXiv / on (web) Publishing site
- 5. Human Oversight
6. Large Language Models (LLMs) - Introduction - Exploring the Nexus of Large Language Models and Legal Systems: A Short Survey / 2404.00990 / ISBN:https://doi.org/10.48550/arXiv.2404.00990 / Published by ArXiv / on (web) Publishing site
- 4 Legal Problems of Large Language Models
- A Review of Multi-Modal Large Language and Vision Models / 2404.01322 / ISBN:https://doi.org/10.48550/arXiv.2404.01322 / Published by ArXiv / on (web) Publishing site
- 5 Vision Models and Multi-Modal Large Language Models
- Designing for Human-Agent Alignment: Understanding what humans want from their agents / 2404.04289 / ISBN:https://doi.org/10.1145/3613905.3650948 / Published by ArXiv / on (web) Publishing site
- 3 Method
5 Discussion - Is Your AI Truly Yours? Leveraging Blockchain for Copyrights, Provenance, and Lineage / 2404.06077 / ISBN:https://doi.org/10.48550/arXiv.2404.06077 / Published by ArXiv / on (web) Publishing site
- III. Proposed Design: IBIS
IV. Detailed Construction - Frontier AI Ethics: Anticipating and Evaluating the Societal Impacts of Language Model Agents / 2404.06750 / ISBN:https://arxiv.org/abs/2404.06750 / Published by ArXiv / on (web) Publishing site
- Polarised Responses
- A Critical Survey on Fairness Benefits of Explainable AI / 2310.13007 / ISBN:https://doi.org/10.1145/3630106.3658990 / Published by ArXiv / on (web) Publishing site
- 4 Critical Survey
5 Three Patterns of Critique
A Methodologies of Surveyed Literature - AI Alignment: A Comprehensive Survey / 2310.19852 / ISBN:https://doi.org/10.48550/arXiv.2310.19852 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Learning from Feedback
3 Learning under Distribution Shift
4 Assurance - Regulating AI-Based Remote Biometric Identification. Investigating the Public Demand for Bans, Audits, and Public Database Registrations / 2401.13605 / ISBN:https://doi.org/10.48550/arXiv.2401.13605 / Published by ArXiv / on (web) Publishing site
- 7 Discussion
- Generative Ghosts: Anticipating Benefits and Risks of AI Afterlives / 2402.01662 / ISBN:https://doi.org/10.48550/arXiv.2402.01662 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
4 Benefits and Risks of Generative Ghost
5 Discussion - On the role of ethics and sustainability in business innovation / 2404.07678 / ISBN:https://doi.org/10.48550/arXiv.2404.07678 / Published by ArXiv / on (web) Publishing site
- Ethical considerations
- PoliTune: Analyzing the Impact of Data Selection and Fine-Tuning on Economic and Political Biases in Large Language Models / 2404.08699 / ISBN:https://doi.org/10.48550/arXiv.2404.08699 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Background and Related Work
3 Methodology - Debunking Robot Rights Metaphysically, Ethically, and Legally / 2404.10072 / ISBN:https://doi.org/10.48550/arXiv.2404.10072 / Published by ArXiv / on (web) Publishing site
- 3 The Robots at Issue
8 The Troubling Implications of Legal Rationales for Robot Rights - Characterizing and modeling harms from interactions with design patterns in AI interfaces / 2404.11370 / ISBN:https://doi.org/10.48550/arXiv.2404.11370 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Background & Related Work
3 Scoping Review of Design Patterns, Affordances, and Harms in AI Interfaces
4 DECAI: Design-Enhanced Control of AI Systems - Just Like Me: The Role of Opinions and Personal Experiences in The Perception of Explanations in Subjective Decision-Making / 2404.12558 / ISBN:https://doi.org/10.48550/arXiv.2404.12558 / Published by ArXiv / on (web) Publishing site
- 5 Limitations
- Large Language Model Supply Chain: A Research Agenda / 2404.12736 / ISBN:https://doi.org/10.48550/arXiv.2404.12736 / Published by ArXiv / on (web) Publishing site
- 4 LLM Lifecycle
- The Necessity of AI Audit Standards Boards / 2404.13060 / ISBN:https://doi.org/10.48550/arXiv.2404.13060 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Audit the process, not just the product
3 Governance for safety - From Model Performance to Claim: How a Change of Focus in Machine Learning Replicability Can Help Bridge the Responsibility Gap / 2404.13131 / ISBN:https://doi.org/10.1145/3630106.3658951 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Disentangling Replicability of Model Performance Claim and Replicability of Social Claim
3 How Claim Replicability Helps Bridge the Responsibility Gap - Designing Safe and Engaging AI Experiences for Children: Towards the Definition of Best Practices in UI/UX Design / 2404.14218 / ISBN:https://doi.org/10.48550/arXiv.2404.14218 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
- AI Procurement Checklists: Revisiting Implementation in the Age of AI Governance / 2404.14660 / ISBN:https://doi.org/10.48550/arXiv.2404.14660 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Technical assessments require an AI expert to complete — and we don’t have enough experts
2 Procurement Loopholes Exist - Who Followed the Blueprint? Analyzing the Responses of U.S. Federal Agencies to the Blueprint for an AI Bill of Rights / 2404.19076 / ISBN:https://doi.org/10.48550/arXiv.2404.19076 / Published by ArXiv / on (web) Publishing site
- Conclusion
- Fairness in AI: challenges in bridging the gap between algorithms and law / 2404.19371 / ISBN:https://doi.org/10.48550/arXiv.2404.19371 / Published by ArXiv / on (web) Publishing site
- II. Discrimination in Law
- War Elephants: Rethinking Combat AI and Human Oversight / 2404.19573 / ISBN:https://doi.org/10.48550/arXiv.2404.19573 / Published by ArXiv / on (web) Publishing site
- 5 Conclusions
- Towards an Ethical and Inclusive Implementation of Artificial Intelligence in Organizations: A Multidimensional Framework / 2405.01697 / ISBN:https://doi.org/10.48550/arXiv.2405.01697 / Published by ArXiv / on (web) Publishing site
- 2 How can organizations participate
- A Survey on Large Language Models for Critical Societal Domains: Finance, Healthcare, and Law / 2405.01769 / ISBN:https://doi.org/10.48550/arXiv.2405.01769 / Published by ArXiv / on (web) Publishing site
- 4 Medicine and Healthcare
5 Law
6 Ethics - AI-Powered Autonomous Weapons Risk Geopolitical Instability and Threaten AI Research / 2405.01859 / ISBN:https://doi.org/10.48550/arXiv.2405.01859 / Published by ArXiv / on (web) Publishing site
- 2. Current State of AWS
- Responsible AI: Portraits with Intelligent Bibliometrics / 2405.02846 / ISBN:https://doi.org/10.48550/arXiv.2405.02846 / Published by ArXiv / on (web) Publishing site
- III. Data and Methodology
IV. Bibliometric Portraits of Responsible AI - A Fourth Wave of Open Data? Exploring the Spectrum of Scenarios for Open Data and Generative AI / 2405.04333 / ISBN:https://doi.org/10.48550/arXiv.2405.04333 / Published by ArXiv / on (web) Publishing site
- Glossary of Terms
1. Introduction
3. A Spectrum of Scenarios of Open Data for Generative AI - Guiding the Way: A Comprehensive Examination of AI Guidelines in Global Media / 2405.04706 / ISBN:https://doi.org/10.48550/arXiv.2405.04706 / Published by ArXiv / on (web) Publishing site
- 4 Results
5 Discussion and conclusions - Trustworthy AI-Generative Content in Intelligent 6G Network: Adversarial, Privacy, and Fairness / 2405.05930 / ISBN:https://doi.org/10.48550/arXiv.2405.05930 / Published by ArXiv / on (web) Publishing site
- I. Introduction
- RAI Guidelines: Method for Generating Responsible AI Guidelines Grounded in Regulations and Usable by (Non-)Technical Roles / 2307.15158 / ISBN:https://doi.org/10.48550/arXiv.2307.15158 / Published by ArXiv / on (web) Publishing site
- 2 Related Work
4 Method for Generating Responsible AI Guidelines - XXAI: Towards eXplicitly eXplainable Artificial Intelligence / 2401.03093 / ISBN:https://doi.org/10.48550/arXiv.2401.03093 / Published by ArXiv / on (web) Publishing site
- 1. Introduction
4. Discussion of the problems of symbolic AI and ways to overcome them - Should agentic conversational AI change how we think about ethics? Characterising an interactional ethics centred on respect / 2401.09082 / ISBN:https://doi.org/10.48550/arXiv.2401.09082 / Published by ArXiv / on (web) Publishing site
- Social-interactional harms
- Unsocial Intelligence: an Investigation of the Assumptions of AGI Discourse / 2401.13142 / ISBN:https://doi.org/10.48550/arXiv.2401.13142 / Published by ArXiv / on (web) Publishing site
- 2 Between Human Intelligence and Technology: AGI’s Dual Value-Laden Pedigrees
3 The Motley Choices of AGI Discourse - Not My Voice! A Taxonomy of Ethical and Safety Harms of Speech Generators / 2402.01708 / ISBN:https://doi.org/10.48550/arXiv.2402.01708 / Published by ArXiv / on (web) Publishing site
- 2 Related Work
4 Research Approach
6 Taxonomy of Harms
7 Discussion
A Appendix - The Wolf Within: Covert Injection of Malice into MLLM Societies via an MLLM Operative / 2402.14859 / ISBN:https://doi.org/10.48550/arXiv.2402.14859 / Published by ArXiv / on (web) Publishing site
- Abstract
4. Experiments - Social Choice Should Guide AI Alignment in Dealing with Diverse Human Feedback / 2404.10271 / ISBN:https://doi.org/10.48550/arXiv.2404.10271 / Published by ArXiv / on (web) Publishing site
- 2. Background
5. What Is the Format of Human Feedback? - A scoping review of using Large Language Models (LLMs) to investigate Electronic Health Records (EHRs) / 2405.03066 / ISBN:https://doi.org/10.48550/arXiv.2405.03066 / Published by ArXiv / on (web) Publishing site
- 2 Materials
3 Results
4 Discussion - Cyber Risks of Machine Translation Critical Errors : Arabic Mental Health Tweets as a Case Study / 2405.11668 / ISBN:https://doi.org/10.48550/arXiv.2405.11668 / Published by ArXiv / on (web) Publishing site
- Abstract
1. Introduction
2. MT Critical Errors
4. Error Analysis
5. Quality Metrics Performance
- 4 The Narrow Depth of Industry’s Responsible AI Research
- Pragmatic auditing: a pilot-driven approach for auditing Machine Learning systems / 2405.13191 / ISBN:https://doi.org/10.48550/arXiv.2405.13191 / Published by ArXiv / on (web) Publishing site
- 5 Lessons Learned from the Pilots
- A Comprehensive Overview of Large Language Models (LLMs) for Cyber Defences: Opportunities and Directions / 2405.14487 / ISBN:https://doi.org/10.48550/arXiv.2405.14487 / Published by ArXiv / on (web) Publishing site
- IV. Network Security
VI. Awareness - Towards Clinical AI Fairness: Filling Gaps in the Puzzle / 2405.17921 / ISBN:https://doi.org/10.48550/arXiv.2405.17921 / Published by ArXiv / on (web) Publishing site
- Main
Results
Methods in clinical AI fairness research
Methods
Additional material - The Future of Child Development in the AI Era. Cross-Disciplinary Perspectives Between AI and Child Development Experts / 2405.19275 / ISBN:https://doi.org/10.48550/arXiv.2405.19275 / Published by ArXiv / on (web) Publishing site
- 2. Anticipated AI Use for Children
3. Discussion - Using Large Language Models for Humanitarian Frontline Negotiation: Opportunities and Considerations / 2405.20195 / ISBN:https://doi.org/10.48550/arXiv.2405.20195 / Published by ArXiv / on (web) Publishing site
- 5. Interview Results: Opportunities and Concerns of Using LLMs in the Frontline
- Responsible AI for Earth Observation / 2405.20868 / ISBN:https://doi.org/10.48550/arXiv.2405.20868 / Published by ArXiv / on (web) Publishing site
- 2 Mitigating (Unfair) Bias
3 Secure AI in EO: Focusing on Defense Mechanisms, Uncertainty Modeling and Explainability
4 Geo-Privacy and Privacy-preserving Measures
5 Maintaining Scientific Excellence, Open Data, and Guiding AI Usage Based on Ethical Principles in EO - Gender Bias Detection in Court Decisions: A Brazilian Case Study / 2406.00393 / ISBN:https://doi.org/10.48550/arXiv.2406.00393 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
3 Framework
4 Discussion
5 Final Remarks
A DVC Dataset: Domestic Violence Cases
C Biases - Transforming Computer Security and Public Trust Through the Exploration of Fine-Tuning Large Language Models / 2406.00628 / ISBN:https://doi.org/10.48550/arXiv.2406.00628 / Published by ArXiv / on (web) Publishing site
- 4 Comparative Analysis of Pre-Trained Models.
- How Ethical Should AI Be? How AI Alignment Shapes the Risk Preferences of LLMs / 2406.01168 / ISBN:https://doi.org/10.48550/arXiv.2406.01168 / Published by ArXiv / on (web) Publishing site
- I. Description of Method/Empirical Design
III. Impact of Alignment on LLMs’ Risk Preferences - Promoting Fairness and Diversity in Speech Datasets for Mental Health and Neurological Disorders Research / 2406.04116 / ISBN:https://doi.org/10.48550/arXiv.2406.04116 / Published by ArXiv / on (web) Publishing site
- 3. Related Work
4. Desiderata
5. Methodology
6. Discussion - MoralBench: Moral Evaluation of LLMs / 2406.04428 / Published by ArXiv / on (web) Publishing site
- 3 Benchmark and Method
- Can Prompt Modifiers Control Bias? A Comparative Analysis of Text-to-Image Generative Models / 2406.05602 / Published by ArXiv / on (web) Publishing site
- 1. Introduction
2. Related Work
3. Bias Evaluation - Deception Analysis with Artificial Intelligence: An Interdisciplinary Perspective / 2406.05724 / ISBN:https://doi.org/10.48550/arXiv.2406.05724 / Published by ArXiv / on (web) Publishing site
- 2 Theories and Components of Deception
3 Reductionism & Previous Research in Deceptive AI
4 DAMAS: A MAS Framework for Deception Analysis - An Empirical Design Justice Approach to Identifying Ethical Considerations in the Intersection of Large Language Models and Social Robotics / 2406.06400 / ISBN:https://doi.org/10.48550/arXiv.2406.06400 / Published by ArXiv / on (web) Publishing site
- 2 Theoretical Background
4 Findings
5 Discussion - The Ethics of Interaction: Mitigating Security Threats in LLMs / 2401.12273 / ISBN:https://doi.org/10.48550/arXiv.2401.12273 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
4 Towards Ethical Mitigation: A Proposed Methodology
6 Ethical Response to LLM Attacks - Some things never change: how far generative AI can really change software engineering practice / 2406.09725 / ISBN:https://doi.org/10.48550/arXiv.2406.09725 / Published by ArXiv / on (web) Publishing site
- 4 Results
- Federated Learning driven Large Language Models for Swarm Intelligence: A Survey / 2406.09831 / ISBN:https://doi.org/10.48550/arXiv.2406.09831 / Published by ArXiv / on (web) Publishing site
- III. Federated LLMs for Swarm Intelligence
IV. Learned Lessons and Open Challenges - Justice in Healthcare Artificial Intelligence in Africa / 2406.10653 / ISBN:https://doi.org/10.48550/arXiv.2406.10653 / Published by ArXiv / on (web) Publishing site
- 2. Bridging the Justice Gap
3. Ensuring Equitable Access to AI Technologies
7. Addressing Bias and Enforcing Fairness - Conversational Agents as Catalysts for Critical Thinking: Challenging Design Fixation in Group Design / 2406.11125 / ISBN:https://doi.org/10.48550/arXiv.2406.11125 / Published by ArXiv / on (web) Publishing site
- 6 POTENTIAL DESIGN CONSIDERATIONS
- Current state of LLM Risks and AI Guardrails / 2406.12934 / ISBN:https://doi.org/10.48550/arXiv.2406.12934 / Published by ArXiv / on (web) Publishing site
- 2 Large Language Model Risks
3 Strategies in Securing Large Language models - Leveraging Large Language Models for Patient Engagement: The Power of Conversational AI in Digital Health / 2406.13659 / ISBN:https://doi.org/10.48550/arXiv.2406.13659 / Published by ArXiv / on (web) Publishing site
- III. CASE STUDIES: APPLICATIONS OF LLMS IN PATIENT ENGAGEMENT
- Documenting Ethical Considerations in Open Source AI Models / 2406.18071 / ISBN:https://doi.org/10.48550/arXiv.2406.18071 / Published by ArXiv / on (web) Publishing site
- 1 INTRODUCTION
3 METHODOLOGY AND STUDY DESIGN
4 RESULTS - A Survey on Privacy Attacks Against Digital Twin Systems in AI-Robotics / 2406.18812 / ISBN:https://doi.org/10.48550/arXiv.2406.18812 / Published by ArXiv / on (web) Publishing site
- III. ATTACKS ON DT-INTEGRATED AI ROBOTS
IV. DT-INTEGRATED ROBOTICS DESIGN CONSIDERATIONS AND DISCUSSION - Staying vigilant in the Age of AI: From content generation to content authentication / 2407.00922 / ISBN:https://doi.org/10.48550/arXiv.2407.00922 / Published by ArXiv / on (web) Publishing site
- Introduction
- Artificial intelligence, rationalization, and the limits of control in the public sector: the case of tax policy optimization / 2407.05336 / ISBN:https://doi.org/10.48550/arXiv.2407.05336 / Published by ArXiv / on (web) Publishing site
- 6. Conclusion
- A Blueprint for Auditing Generative AI / 2407.05338 / ISBN:https://doi.org/10.48550/arXiv.2407.05338 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Why audit generative AI systems?
3 How to audit generative AI systems?
6 Application audits
7 Clarifications and limitations
8 Conclusion - Operationalising AI governance through ethics-based auditing: An industry case study / 2407.06232 / Published by ArXiv / on (web) Publishing site
- 4. An ‘ethics-based’ AI audit
5. Methodology: An industry case study
6. Lessons learned from AstraZeneca’s 2021 AI audit - Auditing of AI: Legal, Ethical and Technical Approaches / 2407.06235 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
4 Auditing of AI’s multidisciplinary foundations - Why should we ever automate moral decision making? / 2407.07671 / ISBN:https://doi.org/10.48550/arXiv.2407.07671 / Published by ArXiv / on (web) Publishing site
- 2 Reasons for automated moral decision making
- Bridging the Global Divide in AI Regulation: A Proposal for a Contextual, Coherent, and Commensurable Framework / 2303.11196 / ISBN:https://doi.org/10.48550/arXiv.2303.11196 / Published by ArXiv / on (web) Publishing site
- I. Introduction
II. Global Divide in AI Regulation: Horizontal vs. Context-Specific
III. Striking a Balance Between the Two Approaches
IV. Proposing an Alternative 3C Framework - CogErgLLM: Exploring Large Language Model Systems Design Perspective Using Cognitive Ergonomics / 2407.02885 / ISBN:https://doi.org/10.48550/arXiv.2407.02885 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
4 Design Framework - Past, Present, and Future: A Survey of The Evolution of Affective Robotics For Well-being / 2407.02957 / ISBN:https://doi.org/10.48550/arXiv.2407.02957 / Published by ArXiv / on (web) Publishing site
- III. Method
IV. Evolution of Affective Robots for Well-Being
V. 10 Years of Affective Robotics - Prioritizing High-Consequence Biological Capabilities in Evaluations of Artificial Intelligence Models / 2407.13059 / ISBN:https://doi.org/10.48550/arXiv.2407.13059 / Published by ArXiv / on (web) Publishing site
- Abstract
Introduction
Proposed Approach to Determining High-Consequence Biological Capabilities of Concern
Next Steps for AI Biosecurity Evaluations - Open Artificial Knowledge / 2407.14371 / ISBN:https://doi.org/10.48550/arXiv.2407.14371 / Published by ArXiv / on (web) Publishing site
- 2. Key Challenges of Artificial Data
Appendices - Honest Computing: Achieving demonstrable data lineage and provenance for driving data and process-sensitive policies / 2407.14390 / ISBN:https://doi.org/10.48550/arXiv.2407.14390 / Published by ArXiv / on (web) Publishing site
- 1. Introduction
3. Honest Computing reference specifications
4. Discussion - RogueGPT: dis-ethical tuning transforms ChatGPT4 into a Rogue AI in 158 Words / 2407.15009 / ISBN:https://doi.org/10.48550/arXiv.2407.15009 / Published by ArXiv / on (web) Publishing site
- IV. Results
VI. Discussion - Mapping the individual, social, and biospheric impacts of Foundation Models / 2407.17129 / ISBN:https://doi.org/10.48550/arXiv.2407.17129 / Published by ArXiv / on (web) Publishing site
- 2 Theoretical Lens: Expanding Views on Algorithmic Risks and Harms
4 Mapping Individual, Social, and Biospheric Impacts of Foundation Models
5 Discussion: Grappling with the Scale and Interconnectedness of Foundation Models
A Appendix - Exploring the Role of Social Support when Integrating Generative AI into Small Business Workflows / 2407.21404 / ISBN:https://doi.org/10.48550/arXiv.2407.21404 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
A Example Storyboards - Integrating ESG and AI: A Comprehensive Responsible AI Assessment Framework / 2408.00965 / ISBN:https://doi.org/10.48550/arXiv.2408.00965 / Published by ArXiv / on (web) Publishing site
- 4 ESG-AI framework
- Surveys Considered Harmful? Reflecting on the Use of Surveys in AI Research, Development, and Governance / 2408.01458 / ISBN:https://doi.org/10.48550/arXiv.2408.01458 / Published by ArXiv / on (web) Publishing site
- 5 Discussion
- Improving Large Language Model (LLM) fidelity through context-aware grounding: A systematic approach to reliability and veracity / 2408.04023 / ISBN:https://doi.org/10.48550/arXiv.2408.04023 / Published by ArXiv / on (web) Publishing site
- 3. Proposed framework
6. Results - AI-Driven Chatbot for Intrusion Detection in Edge Networks: Enhancing Cybersecurity with Ethical User Consent / 2408.04281 / ISBN:https://doi.org/10.48550/arXiv.2408.04281 / Published by ArXiv / on (web) Publishing site
- II. Related Work
- Between Copyright and Computer Science: The Law and Ethics of Generative AI / 2403.14653 / ISBN:https://doi.org/10.48550/arXiv.2403.14653 / Published by ArXiv / on (web) Publishing site
- IV. The Path Ahead
- The Responsible Foundation Model Development Cheatsheet: A Review of Tools & Resources / 2406.16746 / ISBN:https://doi.org/10.48550/arXiv.2406.16746 / Published by ArXiv / on (web) Publishing site
- 3 Data Sources
6 Model Training
8 Model Evaluation - Recent Advances in Generative AI and Large Language Models: Current Status, Challenges, and Perspectives / 2407.14962 / ISBN:https://doi.org/10.48550/arXiv.2407.14962 / Published by ArXiv / on (web) Publishing site
- II. Generative AI
- Speculations on Uncertainty and Humane Algorithms / 2408.06736 / ISBN:https://doi.org/10.48550/arXiv.2408.06736 / Published by ArXiv / on (web) Publishing site
- 2 The Numbers of the Future
- Visualization Atlases: Explaining and Exploring Complex Topics through Data, Visualization, and Narration / 2408.07483 / ISBN:https://doi.org/10.48550/arXiv.2408.07483 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
3 Visualization Atlas Design Patterns
4 Interviews with Visualization Atlas Creators
6 Key Characteristics of Visualization Atlases
7 Discussion - Conference Submission and Review Policies to Foster Responsible Computing Research / 2408.09678 / ISBN:https://doi.org/10.48550/arXiv.2408.09678 / Published by ArXiv / on (web) Publishing site
- Executive Summary
Accurate Reporting and Reproducibility
Financial Conflicts of Interest - Don't Kill the Baby: The Case for AI in Arbitration / 2408.11608 / ISBN:https://doi.org/10.48550/arXiv.2408.11608 / Published by ArXiv / on (web) Publishing site
- Introduction
I. AI and the Federal Arbitration Act - The Problems with Proxies: Making Data Work Visible through Requester Practices / 2408.11667 / ISBN:https://doi.org/10.48550/arXiv.2408.11667 / Published by ArXiv / on (web) Publishing site
- Methods
Findings - Catalog of General Ethical Requirements for AI Certification / 2408.12289 / ISBN:https://doi.org/10.48550/arXiv.2408.12289 / Published by ArXiv / on (web) Publishing site
- 3 European Union AI Act: a brief overview
5 Overall Ethical Requirements (O)
6 Fairness (F)
8 Safety and Robustness (SR)
9 Sustainability (SU)
11 Truthfulness (TR) - Is Generative AI the Next Tactical Cyber Weapon For Threat Actors? Unforeseen Implications of AI Generated Cyber Attacks / 2408.12806 / ISBN:https://doi.org/10.48550/arXiv.2408.12806 / Published by ArXiv / on (web) Publishing site
- II. Related Work
IV. Attack Methodology - Has Multimodal Learning Delivered Universal Intelligence in Healthcare? A Comprehensive Survey / 2408.12880 / ISBN:https://doi.org/10.48550/arXiv.2408.12880 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Preliminaries
3 Multimodal Medical Studies
4 Contrastive Foundation Models (CFMs)
5 Multimodal LLMs (MLLMs)
7 Challenges and Future Directions
Appendix - Aligning XAI with EU Regulations for Smart Biomedical Devices: A Methodology for Compliance Analysis / 2408.15121 / ISBN:https://doi.org/10.48550/arXiv.2408.15121 / Published by ArXiv / on (web) Publishing site
- Abstract
3 Methodology
6 A Categorisation of XAI in Terms of Explanatory Goals
7 Case Studies: Closed-Loop and Semi-Closed-Loop Control
8 Instructions for Use & Discussion of Findings - What Is Required for Empathic AI? It Depends, and Why That Matters for AI Developers and Users / 2408.15354 / ISBN:https://doi.org/10.48550/arXiv.2408.15354 / Published by ArXiv / on (web) Publishing site
- Three Empathic AI Use Cases in Medicine
“Fine cuts” of Empathy: Capabilities and Distinctions under the Empathy Umbrella
What Empathic Capabilities Do AIs Need?
Implications for AI Creators and Users - Trustworthy and Responsible AI for Human-Centric Autonomous Decision-Making Systems / 2408.15550 / ISBN:https://doi.org/10.48550/arXiv.2408.15550 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
4 Biases
5 Trustworthy and Responsible AI in Human-centric Applications
8 Conclusion and Final Remarks - A Survey for Large Language Models in Biomedicine / 2409.00133 / ISBN:https://doi.org/10.48550/arXiv.2409.00133 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Background
3 LLMs in Zero-Shot Biomedical Applications
4 Adapting General LLMs to the Biomedical Field - Digital Homunculi: Reimagining Democracy Research with Generative Agents / 2409.00826 / ISBN:https://doi.org/10.48550/arXiv.2409.00826 / Published by ArXiv / on (web) Publishing site
- 2. The Experimentation Bottleneck
3. How GenAI Could Make a Difference
5. Annoyances or Dealbreakers? - AI Governance in Higher Education: Case Studies of Guidance at Big Ten Universities / 2409.02017 / ISBN:https://doi.org/10.48550/arXiv.2409.02017 / Published by ArXiv / on (web) Publishing site
- Results
- Preliminary Insights on Industry Practices for Addressing Fairness Debt / 2409.02432 / ISBN:https://doi.org/10.48550/arXiv.2409.02432 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Fairness Debt
4 Findings
6 Conclusions - DetoxBench: Benchmarking Large Language Models for Multitask Fraud & Abuse Detection / 2409.06072 / ISBN:https://doi.org/10.48550/arXiv.2409.06072 / Published by ArXiv / on (web) Publishing site
- 6 Results
9 Conclusion & Future Work - Artificial intelligence to advance Earth observation: : A review of models, recent trends, and pathways forward / 2305.08413 / ISBN:https://doi.org/10.48550/arXiv.2305.08413 / Published by ArXiv / on (web) Publishing site
- 5 Physics-aware machine learning
- LLM generated responses to mitigate the impact of hate speech / 2311.16905 / ISBN:https://doi.org/10.48550/arXiv.2311.16905 / Published by ArXiv / on (web) Publishing site
- 4 Hate Classifier Model
- Why business adoption of quantum and AI technology must be ethical / 2312.10081 / ISBN:https://doi.org/10.48550/arXiv.2312.10081 / Published by ArXiv / on (web) Publishing site
- Argument from a holistic and humanistic perspective
- Views on AI aren't binary -- they're plural / 2312.14230 / ISBN:https://doi.org/10.48550/arXiv.2312.14230 / Published by ArXiv / on (web) Publishing site
- The false binary: The caricature
The false binary: A note on language
The complex reality: Complication: There are more than two camps
Overcoming the dichotomy: Why should we? - Data-Centric Foundation Models in Computational Healthcare: A Survey / 2401.02458 / ISBN:https://doi.org/10.48550/arXiv.2401.02458 / Published by ArXiv / on (web) Publishing site
- 3 Foundation Models in Healthcare
4 Multi-Modal Data Fusion
5 Data Quantity
6 Data Annotation
8 Performance Evaluation
A Healthcare Data Modalities - Ethical Artificial Intelligence Principles and Guidelines for the Governance and Utilization of Highly Advanced Large Language Models / 2401.10745 / ISBN:https://doi.org/10.48550/arXiv.2401.10745 / Published by ArXiv / on (web) Publishing site
- Advanced Large Language Models Governance Using AI Ethics
- Recent Advances in Hate Speech Moderation: Multimodality and the Role of Large Models / 2401.16727 / ISBN:https://doi.org/10.48550/arXiv.2401.16727 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Hate Speech
3 Methodology - Integrating Generative AI in Hackathons: Opportunities, Challenges, and Educational Implications / 2401.17434 / ISBN:https://doi.org/10.48550/arXiv.2401.17434 / Published by ArXiv / on (web) Publishing site
- 1. Introduction
3. Results - Large language models as linguistic simulators and cognitive models in human research / 2402.04470 / ISBN:https://doi.org/10.48550/arXiv.2402.04470 / Published by ArXiv / on (web) Publishing site
- Language models as human participants
Six fallacies that misinterpret language models
Using language models to simulate roles and model cognitive processes - Navigating LLM Ethics: Advancements, Challenges, and Future Directions / 2406.18841 / ISBN:https://doi.org/10.48550/arXiv.2406.18841 / Published by ArXiv / on (web) Publishing site
- I. Introduction
IV. Findings and Resultant Themes - How Mature is Requirements Engineering for AI-based Systems? A Systematic Mapping Study on Practices, Challenges, and Future Research Directions / 2409.07192 / ISBN:https://doi.org/10.48550/arXiv.2409.07192 / Published by ArXiv / on (web) Publishing site
- 3 Research Design
4 Results
5 Open Challenges and Future Research Directions (RQ5)
6 Discussions
7 Threats to Validity
8 Conclusion - Synthetic Human Memories: AI-Edited Images and Videos Can Implant False Memories and Distort Recollection / 2409.08895 / ISBN:https://doi.org/10.48550/arXiv.2409.08895 / Published by ArXiv / on (web) Publishing site
- Introduction
2 Methodology
4 Results of Additional Analysis
5 Discussion - Improving governance outcomes through AI documentation: Bridging theory and practice / 2409.08960 / ISBN:https://doi.org/10.48550/arXiv.2409.08960 / Published by ArXiv / on (web) Publishing site
- 4 Results
- ValueCompass: A Framework for Measuring Contextual Value Alignment Between Human and LLMs / 2409.09586 / ISBN:https://doi.org/10.48550/arXiv.2409.09586 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
3 ValueCompass Framework - Reporting Non-Consensual Intimate Media: An Audit Study of Deepfakes / 2409.12138 / ISBN:https://doi.org/10.48550/arXiv.2409.12138 / Published by ArXiv / on (web) Publishing site
- 2 Related Research
5 Discussion - Generative AI Carries Non-Democratic Biases and Stereotypes: Representation of Women, Black Individuals, Age Groups, and People with Disability in AI-Generated Images across Occupations / 2409.13869 / ISBN:https://doi.org/10.48550/arXiv.2409.13869 / Published by ArXiv / on (web) Publishing site
- Mutual Impacts: Technology and Democracy
Stereotypes
Middle-aged and elders’ representation
Conclusion - GenAI Advertising: Risks of Personalizing Ads with LLMs / 2409.15436 / ISBN:https://doi.org/10.48550/arXiv.2409.15436 / Published by ArXiv / on (web) Publishing site
- 2 Background and Related Work
7 Discussion - XTRUST: On the Multilingual Trustworthiness of Large Language Models / 2409.15762 / ISBN:https://doi.org/10.48550/arXiv.2409.15762 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
4 Experiments
Appendices - Artificial Human Intelligence: The role of Humans in the Development of Next Generation AI / 2409.16001 / ISBN:https://doi.org/10.48550/arXiv.2409.16001 / Published by ArXiv / on (web) Publishing site
- II. Views on Intelligence
- Ethical and Scalable Automation: A Governance and Compliance Framework for Business Applications / 2409.16872 / ISBN:https://doi.org/10.48550/arXiv.2409.16872 / Published by ArXiv / on (web) Publishing site
- 3. Methodology
- Decoding Large-Language Models: A Systematic Overview of Socio-Technical Impacts, Constraints, and Emerging Questions / 2409.16974 / ISBN:https://doi.org/10.48550/arXiv.2409.16974 / Published by ArXiv / on (web) Publishing site
- 4 Characteristics of Publications
5 Aims & Objectives (RQ1) - Safety challenges of AI in medicine / 2409.18968 / ISBN:https://doi.org/10.48550/arXiv.2409.18968 / Published by ArXiv / on (web) Publishing site
- 2 Inherent problems of AI related to medicine
4 AI safety issues related to large language models in medicine - Responsible AI in Open Ecosystems: Reconciling Innovation with Risk Assessment and Disclosure / 2409.19104 / ISBN:https://doi.org/10.48550/arXiv.2409.19104 / Published by ArXiv / on (web) Publishing site
- 3 Methods
4 Results - The Gradient of Health Data Privacy / 2410.00897 / ISBN:https://doi.org/10.48550/arXiv.2410.00897 / Published by ArXiv / on (web) Publishing site
- 3 The Health Data Privacy Gradient
4 Technical Implementation of a Privacy Gradient Model
5 Legal and Ethical Implications
6 Case Studies
7 Policy Implications and Recommendations - Ethical software requirements from user reviews: A systematic literature review / 2410.01833 / ISBN:https://doi.org/10.48550/arXiv.2410.01833 / Published by ArXiv / on (web) Publishing site
- II. Background
IV. Results
V. Discussion
APPENDIX C PUBLICATION TRENDS - AI-Press: A Multi-Agent News Generating and Feedback Simulation System Powered by Large Language Models / 2410.07561 / ISBN:https://doi.org/10.48550/arXiv.2410.07561 / Published by ArXiv / on (web) Publishing site
- 3 AI Press System
4 Experimental Setup - Investigating Labeler Bias in Face Annotation for Machine Learning / 2301.09902 / ISBN:https://doi.org/10.48550/arXiv.2301.09902 / Published by ArXiv / on (web) Publishing site
- Abstract
1. Introduction
2. Related Work
4. Results
5. Discussion
6. Conclusion - The Design Space of in-IDE Human-AI Experience / 2410.08676 / ISBN:https://doi.org/10.48550/arXiv.2410.08676 / Published by ArXiv / on (web) Publishing site
- IV. Results
V. Discussion - Trust or Bust: Ensuring Trustworthiness in Autonomous Weapon Systems / 2410.10284 / ISBN:https://doi.org/10.48550/arXiv.2410.10284 / Published by ArXiv / on (web) Publishing site
- III. Research Methodology
IV. Challenges of AWS - Navigating the Cultural Kaleidoscope: A Hitchhiker's Guide to Sensitivity in Large Language Models
/ 2410.12880 / ISBN:https://doi.org/10.48550/arXiv.2410.12880 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Previous studies
3 Overview of cultural safety
7 Cultural safeguarding - How Do AI Companies Fine-Tune Policy? Examining Regulatory Capture in AI Governance / 2410.13042 / ISBN:https://doi.org/10.48550/arXiv.2410.13042 / Published by ArXiv / on (web) Publishing site
- Executive Summary
1 Introduction
2 Defining “Regulatory Capture”
3 Methods
4 Outcomes of Regulatory Capture in US AI Policy
6 Mitigating or Preventing Regulatory Capture in AI Policy
7 Limitations
Appendices - Data Defenses Against Large Language Models / 2410.13138 / ISBN:https://doi.org/10.48550/arXiv.2410.13138 / Published by ArXiv / on (web) Publishing site
- 2 Ethics of Resisting LLM Inference
7 Conclusion and Limitations - Do LLMs Have Political Correctness? Analyzing Ethical Biases and Jailbreak Vulnerabilities in AI Systems / 2410.13334 / ISBN:https://doi.org/10.48550/arXiv.2410.13334 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
- A Simulation System Towards Solving Societal-Scale Manipulation / 2410.13915 / ISBN:https://doi.org/10.48550/arXiv.2410.13915 / Published by ArXiv / on (web) Publishing site
- 4 Analysis
- Confrontation or Acceptance: Understanding Novice Visual Artists' Perception towards AI-assisted Art Creation / 2410.14925 / ISBN:https://doi.org/10.48550/arXiv.2410.14925 / Published by ArXiv / on (web) Publishing site
- 2 Background and Related Work
5 RQ1: Evolution of the Opinions Towards AI Tools
7 RQ3: The Stakeholder's Opinions Towards AI Tools - Jailbreaking and Mitigation of Vulnerabilities in Large Language Models / 2410.15236 / ISBN:https://doi.org/10.48550/arXiv.2410.15236 / Published by ArXiv / on (web) Publishing site
- I. Introduction
VI. Research Gaps and Future Directions
VII. Conclusion - Redefining Finance: The Influence of Artificial Intelligence (AI) and Machine Learning (ML) / 2410.15951 / ISBN:https://doi.org/10.48550/arXiv.2410.15951 / Published by ArXiv / on (web) Publishing site
- What Is AI & ML
- Vernacularizing Taxonomies of Harm is Essential for Operationalizing Holistic AI Safety / 2410.16562 / ISBN:https://doi.org/10.48550/arXiv.2410.16562 / Published by ArXiv / on (web) Publishing site
- Taxonomies of Harm Must be Vernacularized to be Operationalized
- Trustworthy XAI and Application / 2410.17139 / ISBN:https://doi.org/10.48550/arXiv.2410.17139 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
3 Applications of XAI - Towards Automated Penetration Testing: Introducing LLM Benchmark, Analysis, and Improvements / 2410.17141 / ISBN:https://doi.org/10.48550/arXiv.2410.17141 / Published by ArXiv / on (web) Publishing site
- 3 Benchmark
Supplementary Materials - Ethical Leadership in the Age of AI Challenges, Opportunities and Framework for Ethical Leadership / 2410.18095 / ISBN:https://doi.org/10.48550/arXiv.2410.18095 / Published by ArXiv / on (web) Publishing site
- Ethical Challenges Presented by AI
- Demystifying Large Language Models for Medicine: A Primer / 2410.18856 / ISBN:https://doi.org/10.48550/arXiv.2410.18856 / Published by ArXiv / on (web) Publishing site
- Task Formulation
Prompt engineering
Glossary - The Cat and Mouse Game: The Ongoing Arms Race Between Diffusion Models and Detection Methods / 2410.18866 / ISBN:https://doi.org/10.48550/arXiv.2410.18866 / Published by ArXiv / on (web) Publishing site
- IV. Detection Methods Based on Textual and Multimodal Analysis for Text-to-Image Models
V. Datasets and Benchmarks - TRIAGE: Ethical Benchmarking of AI Models Through Mass Casualty Simulations / 2410.18991 / ISBN:https://doi.org/10.48550/arXiv.2410.18991 / Published by ArXiv / on (web) Publishing site
- 3 Results
- The Dark Side of AI Companionship: A Taxonomy of Harmful Algorithmic Behaviors in Human-AI Relationships / 2410.20130 / ISBN:https://doi.org/10.48550/arXiv.2410.20130 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Related Work
3 Methodology
4 Results
5 Discussion - The Trap of Presumed Equivalence: Artificial General Intelligence Should Not Be Assessed on the Scale of Human Intelligence / 2410.21296 / ISBN:https://doi.org/10.48550/arXiv.2410.21296 / Published by ArXiv / on (web) Publishing site
- 2 Related Work
3 Assessing the Current State of Self-Awareness in Artificial Intelligent Systems
5 The Runaway AGI Evolutionary Gap - Standardization Trends on Safety and Trustworthiness Technology for Advanced AI / 2410.22151 / ISBN:https://doi.org/10.48550/arXiv.2410.22151 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
3 Trends in advanced AI safety and trustworthiness standardization - Democratizing Reward Design for Personal and Representative Value-Alignment / 2410.22203 / ISBN:https://doi.org/10.48550/arXiv.2410.22203 / Published by ArXiv / on (web) Publishing site
- 4 Study Design & Methodology
Appendices - Web Scraping for Research: Legal, Ethical, Institutional, and Scientific Considerations / 2410.23432 / ISBN:https://doi.org/10.48550/arXiv.2410.23432 / Published by ArXiv / on (web) Publishing site
- 4 Recommendations
Appendices - The Transformative Impact of AI and Deep Learning in Business: A Literature Review / 2410.23443 / ISBN:https://doi.org/10.48550/arXiv.2410.23443 / Published by ArXiv / on (web) Publishing site
- III. Literature Review: Current Applications of AI and Deep Learning in Business
- Where Assessment Validation and Responsible AI Meet / 2411.02577 / ISBN:https://doi.org/10.48550/arXiv.2411.02577 / Published by ArXiv / on (web) Publishing site
- Classical Assessment Validation Theory and Responsible AI
The Evolution of Responsible AI for Assessment - Examining Human-AI Collaboration for Co-Writing Constructive Comments Online / 2411.03295 / ISBN:https://doi.org/10.48550/arXiv.2411.03295 / Published by ArXiv / on (web) Publishing site
- 5 Discussion
- Smoke Screens and Scapegoats: The Reality of General Data Protection Regulation Compliance -- Privacy and Ethics in the Case of Replika AI / 2411.04490 / ISBN:https://doi.org/10.48550/arXiv.2411.04490 / Published by ArXiv / on (web) Publishing site
- 1. Introduction
4. Results
5. Discussion
6. Conclusions - A Comprehensive Review of Multimodal XR Applications, Risks, and Ethical Challenges in the Metaverse / 2411.04508 / ISBN:https://doi.org/10.48550/arXiv.2411.04508 / Published by ArXiv / on (web) Publishing site
- 2. Multimodal Interaction Across the Virtual Continuum
3. XR Applications: Expanding Multimodal Interactions Across Domains
4. Potential Risks and Ethical Challenges of XR and the Metaverse - I Always Felt that Something Was Wrong.: Understanding Compliance Risks and Mitigation Strategies when Professionals Use Large Language Models / 2411.04576 / ISBN:https://doi.org/10.48550/arXiv.2411.04576 / Published by ArXiv / on (web) Publishing site
- 2 Background and Related Work
- Navigating the Cultural Kaleidoscope: A Hitchhiker's Guide to Sensitivity in Large Language Models
/ 2410.12880 / ISBN:https://doi.org/10.48550/arXiv.2410.12880 / Published by ArXiv / on (web) Publishing site
- Appendices
- Improving governance outcomes through AI documentation: Bridging theory and practice / 2409.08960 / ISBN:https://doi.org/10.48550/arXiv.2409.08960 / Published by ArXiv / on (web) Publishing site
- 6 Directions for future research
- How should AI decisions be explained? Requirements for Explanations from the Perspective of European Law / 2404.12762 / ISBN:https://doi.org/10.48550/arXiv.2404.12762 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
4 Legal Requirements: Decision-Centric - A Survey on Medical Large Language Models: Technology, Application, Trustworthiness, and Future Directions / 2406.03712 / ISBN:https://doi.org/10.48550/arXiv.2406.03712 / Published by ArXiv / on (web) Publishing site
- II. Background and Technology
IV. Improving Algorithms for Med-LLMs
VII. Future Directions - The doctor will polygraph you now: ethical concerns with AI for fact-checking patients / 2408.07896 / ISBN:https://doi.org/10.48550/arXiv.2408.07896 / Published by ArXiv / on (web) Publishing site
- 1. Introduction
3. Methods - Nteasee: A mixed methods study of expert and general population perspectives on deploying AI for health in African countries / 2409.12197 / ISBN:https://doi.org/10.48550/arXiv.2409.12197 / Published by ArXiv / on (web) Publishing site
- 3 Methods
- Large-scale moral machine experiment on large language models / 2411.06790 / ISBN:https://doi.org/10.48550/arXiv.2411.06790 / Published by ArXiv / on (web) Publishing site
- Materials and methods
- Persuasion with Large Language Models: a Survey / 2411.06837 / ISBN:https://doi.org/10.48550/arXiv.2411.06837 / Published by ArXiv / on (web) Publishing site
- 4 Experimental Design Patterns
- The EU AI Act is a good start but falls short / 2411.08535 / ISBN:https://doi.org/10.48550/arXiv.2411.08535 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
3 Results
4 Discussion - Programming with AI: Evaluating ChatGPT, Gemini, AlphaCode, and GitHub Copilot for Programmers / 2411.09224 / ISBN:https://doi.org/10.48550/arXiv.2411.09224 / Published by ArXiv / on (web) Publishing site
- 3 Transformer Architecture
- Generative AI in Multimodal User Interfaces: Trends, Challenges, and Cross-Platform Adaptability / 2411.10234 / ISBN:https://doi.org/10.48550/arXiv.2411.10234 / Published by ArXiv / on (web) Publishing site
- Abstract
II. Problem Statement: the Interface Dilemma
V. Multimodal Interaction
VI. Limitations, Challenges, and Future Directions for AI-Driven Interfaces
VII. Metrics for Evaluating AI-Driven Multimodal UIs - Bias in Large Language Models: Origin, Evaluation, and Mitigation / 2411.10915 / ISBN:https://doi.org/10.48550/arXiv.2411.10915 / Published by ArXiv / on (web) Publishing site
- 1. Introduction
2. Intrinsic Bias
3. Extrinsic Bias
4. Bias Evaluation
5. Bias Mitigation
6. Ethical Concerns and Legal Challenges - Framework for developing and evaluating ethical collaboration between expert and machine / 2411.10983 / ISBN:https://doi.org/10.48550/arXiv.2411.10983 / Published by ArXiv / on (web) Publishing site
- 2. Method
- GPT versus Humans: Uncovering Ethical Concerns in Conversational Generative AI-empowered Multi-Robot Systems / 2411.14009 / ISBN:https://doi.org/10.48550/arXiv.2411.14009 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Background
4 Results
5 Discussion - Privacy-Preserving Video Anomaly Detection: A Survey / 2411.14565 / ISBN:https://doi.org/10.48550/arXiv.2411.14565 / Published by ArXiv / on (web) Publishing site
- I. Introduction
III. P2VAD with non-Identifiable Elements
VI. Evaluation Benchmarks and Metrics - Advancing Transformative Education: Generative AI as a Catalyst for Equity and Innovation / 2411.15971 / ISBN:https://doi.org/10.48550/arXiv.2411.15971 / Published by ArXiv / on (web) Publishing site
- 6 Ethical Implications of Generative AI in Education
- Good intentions, unintended consequences: exploring forecasting harms
/ 2411.16531 / ISBN:https://doi.org/10.48550/arXiv.2411.16531 / Published by ArXiv / on (web) Publishing site
- 2 Harms in forecasting
3 Methods
4 Findings: typology of harm in forecasting
5 Discussion
Appendices - Examining Multimodal Gender and Content Bias in ChatGPT-4o / 2411.19140 / ISBN:https://doi.org/10.48550/arXiv.2411.19140 / Published by ArXiv / on (web) Publishing site
- 1. Introduction
2. Related Works
4. Visual Generation Experiment - Ethics and Artificial Intelligence Adoption / 2412.00330 / ISBN:https://doi.org/10.48550/arXiv.2412.00330 / Published by ArXiv / on (web) Publishing site
- V. Analysis and Results
- Ethical Challenges and Evolving Strategies in the Integration of Artificial Intelligence into Clinical Practice / 2412.03576 / ISBN:https://doi.org/10.48550/arXiv.2412.03576 / Published by ArXiv / on (web) Publishing site
- Emerging Ideas
- Exploring AI Text Generation, Retrieval-Augmented Generation, and Detection Technologies: a Comprehensive Overview / 2412.03933 / ISBN:https://doi.org/10.48550/arXiv.2412.03933 / Published by ArXiv / on (web) Publishing site
- VI. Ethical Considerations
- From Principles to Practice: A Deep Dive into AI Ethics and Regulations / 2412.04683 / ISBN:https://doi.org/10.48550/arXiv.2412.04683 / Published by ArXiv / on (web) Publishing site
- III AI Ethics and the notion of AI as uncharted moral territory
- Technology as uncharted territory: Contextual integrity and the notion of AI as new ethical ground / 2412.05130 / ISBN:https://doi.org/10.48550/arXiv.2412.05130 / Published by ArXiv / on (web) Publishing site
- III AI Ethics and the notion of AI as uncharted moral territory
- Can OpenAI o1 outperform humans in higher-order cognitive thinking? / 2412.05753 / ISBN:https://doi.org/10.48550/arXiv.2412.05753 / Published by ArXiv / on (web) Publishing site
- 2 Methods
- Political-LLM: Large Language Models in Political Science / 2412.06864 / ISBN:https://doi.org/10.48550/arXiv.2412.06864 / Published by ArXiv / on (web) Publishing site
- 4 Classical Political Science Functions and Modern Transformations
6 Future Directions & Challenges - Digital Democracy in the Age of Artificial Intelligence / 2412.07791 / ISBN:https://doi.org/10.48550/arXiv.2412.07791 / Published by ArXiv / on (web) Publishing site
- 2. Digital Citizenship: from Individualised to Stereotyped Identities
4. Representation: Digital and AI Technologies in Modern Electoral Processes - Bias in Large Language Models: Origin, Evaluation, and Mitigation / 2411.10915 / ISBN:https://doi.org/10.48550/arXiv.2411.10915 / Published by ArXiv / on (web) Publishing site
- Appendices
- CERN for AI: A Theoretical Framework for Autonomous Simulation-Based Artificial Intelligence Testing and Alignment / 2312.09402 / ISBN:https://doi.org/10.48550/arXiv.2312.09402 / Published by ArXiv / on (web) Publishing site
- Introduction
- Reviewing Intelligent Cinematography: AI research for camera-based video production / 2405.05039 / ISBN:https://doi.org/10.48550/arXiv.2405.05039 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Technical Background
3 Intelligent Cinematography in Production
Appendices - Shaping AI's Impact on Billions of Lives / 2412.02730 / ISBN:https://doi.org/10.48550/arXiv.2412.02730 / Published by ArXiv / on (web) Publishing site
- I. Putting Pragmatic AI in Context
II. Demystifying the Potential Impact on AI - Intelligent Electric Power Steering: Artificial Intelligence Integration Enhances Vehicle Safety and Performance / 2412.08133 / ISBN:https://doi.org/10.48550/arXiv.2412.08133 / Published by ArXiv / on (web) Publishing site
- III. AI Integration in EPS: Safety and Performance Enhancement
- AI Ethics in Smart Homes: Progress, User Requirements and Challenges / 2412.09813 / ISBN:https://doi.org/10.48550/arXiv.2412.09813 / Published by ArXiv / on (web) Publishing site
- 3 Smart Home Technologies and AI Ethics
5 AI Ethics from Technology's Perspective - Research Integrity and GenAI: A Systematic Analysis of Ethical Challenges Across Research Phases / 2412.10134 / ISBN:https://doi.org/10.48550/arXiv.2412.10134 / Published by ArXiv / on (web) Publishing site
- Research Phases and AI Tools
Discussion - On Large Language Models in Mission-Critical IT Governance: Are We Ready Yet? / 2412.11698 / ISBN:https://doi.org/10.48550/arXiv.2412.11698 / Published by ArXiv / on (web) Publishing site
- II. Study Design
- Bots against Bias: Critical Next Steps for Human-Robot Interaction / 2412.12542 / ISBN:https://doi.org/10.1017/9781009386708.023 / Published by ArXiv / on (web) Publishing site
- 2 Track: Robots against Bias
3 Track: Against Bias in Robots - Clio: Privacy-Preserving Insights into Real-World AI Use / 2412.13678 / ISBN:https://doi.org/10.48550/arXiv.2412.13678 / Published by ArXiv / on (web) Publishing site
- 4 Clio for safety
6 Risks, ethical considerations, and mitigations - User-Generated Content and Editors in Games: A Comprehensive Survey / 2412.13743 / ISBN:https://doi.org/10.48550/arXiv.2412.13743 / Published by ArXiv / on (web) Publishing site
- Abstract
I. Introduction
II. Related Work
III. Categories of User-Generated Content
IV. User-Generated Content Editor
V. Discussion - Towards Friendly AI: A Comprehensive Review and New Perspectives on Human-AI Alignment / 2412.15114 / ISBN:https://doi.org/10.48550/arXiv.2412.15114 / Published by ArXiv / on (web) Publishing site
- IV. Applications
- Autonomous Vehicle Security: A Deep Dive into Threat Modeling / 2412.15348 / ISBN:https://doi.org/10.48550/arXiv.2412.15348 / Published by ArXiv / on (web) Publishing site
- VI. Comparative Analysis of Threat Modeling Frameworks for Autonomous Vehicles
- Navigating AI to Unpack Youth Privacy Concerns: An In-Depth Exploration and Systematic Review / 2412.16369 / ISBN:https://doi.org/10.48550/arXiv.2412.16369 / Published by ArXiv / on (web) Publishing site
- III. Results
- Ethics and Technical Aspects of Generative AI Models in Digital Content Creation / 2412.16389 / ISBN:https://doi.org/10.48550/arXiv.2412.16389 / Published by ArXiv / on (web) Publishing site
- 2 Literature Review
3 Methodology
4 Results
5 Discussion - Large Language Model Safety: A Holistic Survey / 2412.17686 / ISBN:https://doi.org/10.48550/arXiv.2412.17686 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
3 Value Misalignment
4 Robustness to Attack
6 Autonomous AI Risks
8 Interpretability for LLM Safety
9 Technology Roadmaps / Strategies to LLM Safety in Practice - INFELM: In-depth Fairness Evaluation of Large Text-To-Image Models / 2501.01973 / ISBN:https://doi.org/10.48550/arXiv.2501.01973 / Published by ArXiv / on (web) Publishing site
- 3 Preliminaries
4 Method - Curious, Critical Thinker, Empathetic, and Ethically Responsible: Essential Soft Skills for Data Scientists in Software Engineering / 2501.02088 / ISBN:https://doi.org/10.48550/arXiv.2501.02088 / Published by ArXiv / on (web) Publishing site
- II. Background
IV. Findings - Human-centered Geospatial Data Science / 2501.05595 / ISBN:https://doi.org/10.48550/arXiv.2501.05595 / Published by ArXiv / on (web) Publishing site
- 2. Understanding Human Experiences
- Datasheets for Healthcare AI: A Framework for Transparency and Bias Mitigation / 2501.05617 / ISBN:https://doi.org/10.48550/arXiv.2501.05617 / Published by ArXiv / on (web) Publishing site
- 2. Literature Review
3. Developing an Improved Machine-Readable Datasheet - Concerns and Values in Human-Robot Interactions: A Focus on Social Robotics / 2501.05628 / ISBN:https://doi.org/10.48550/arXiv.2501.05628 / Published by ArXiv / on (web) Publishing site
- 3 Phase 1: Scoping Review
4 Phase 2: Focus Groups
Appendices - Hybrid Approaches for Moral Value Alignment in AI Agents: a Manifesto / 2312.01818 / ISBN:https://doi.org/10.48550/arXiv.2312.01818 / Published by ArXiv / on (web) Publishing site
- 1. Introduction
3. Designing AI Agents based on Moral Principles
4. Evaluating Moral Learning Agents
Appendix A. Other Games Involving Morality - Addressing Intersectionality, Explainability, and Ethics in AI-Driven Diagnostics: A Rebuttal and Call for Transdiciplinary Action / 2501.08497 / ISBN:https://doi.org/10.48550/arXiv.2501.08497 / Published by ArXiv / on (web) Publishing site
- 7 Conclusion
- Uncovering Bias in Foundation Models: Impact, Testing, Harm, and Mitigation / 2501.10453 / ISBN:https://doi.org/10.48550/arXiv.2501.10453 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Results and Discussion
3 Conclusion
4 Method
Supplementary - Bias in Decision-Making for AI's Ethical Dilemmas: A Comparative Study of ChatGPT and Claude / 2501.10484 / ISBN:https://doi.org/10.48550/arXiv.2501.10484 / Published by ArXiv / on (web) Publishing site
- Related Works
Discussion and Conclusion - AI Toolkit: Libraries and Essays for Exploring the Technology and Ethics of AI / 2501.10576 / ISBN:https://doi.org/10.48550/arXiv.2501.10576 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
- Harnessing the Potential of Large Language Models in Modern Marketing Management: Applications, Future Directions, and Strategic Recommendations / 2501.10685 / ISBN:https://doi.org/10.48550/arXiv.2501.10685 / Published by ArXiv / on (web) Publishing site
- 1. Introduction
3- Content Creation and Personalization
7- Social Media and Community Engagement
8- Ethical Considerations in Marketing AI
10- Case Studies and Real-world Applications - Development of Application-Specific Large Language Models to Facilitate Research Ethics Review / 2501.10741 / ISBN:https://doi.org/10.48550/arXiv.2501.10741 / Published by ArXiv / on (web) Publishing site
- V. Discussion: Potential Benefits, Risks, and Replies
- Governance of Generative AI in Creative Work: Consent, Credit, Compensation, and Beyond / 2501.11457 / ISBN:https://doi.org/10.48550/arXiv.2501.11457 / Published by ArXiv / on (web) Publishing site
- 5 Discussion
- Human services organizations and the responsible integration of AI: Considering ethics and contextualizing risk(s) / 2501.11705 / ISBN:https://doi.org/10.48550/arXiv.2501.11705 / Published by ArXiv / on (web) Publishing site
- Human services organizations and the responsible integration of AI: Considering ethics and contextualizing risk(s)
- Deploying Privacy Guardrails for LLMs: A Comparative Analysis of Real-World Applications
/ 2501.12456 / ISBN:https://doi.org/10.48550/arXiv.2501.12456 / Published by ArXiv / on (web) Publishing site
- State of the Art
Deployment 1: Data and Model Factory
Comparison of Deployments and Discussion - A Critical Field Guide for Working with Machine Learning Datasets / 2501.15491 / ISBN:https://doi.org/10.48550/arXiv.2501.15491 / Published by ArXiv / on (web) Publishing site
- 1. Introduction to Machine Learning Datasets
3. Parts of a Dataset
4. Types of Datasets
5. Transforming Datasets
6. The Dataset Lifecycle
Endnotes - Responsible Generative AI Use by Product Managers: Recoupling Ethical Principles and Practices / 2501.16531 / ISBN:https://doi.org/10.48550/arXiv.2501.16531 / Published by ArXiv / on (web) Publishing site
- 2. Background
3. Theoretical Framework
4. Methods
Appendices - The Third Moment of AI Ethics: Developing Relatable and Contextualized Tools / 2501.16954 / ISBN:https://doi.org/10.48550/arXiv.2501.16954 / Published by ArXiv / on (web) Publishing site
- Appendices
- Examining the Expanding Role of Synthetic Data Throughout the AI Development Pipeline / 2501.18493 / ISBN:https://doi.org/10.48550/arXiv.2501.18493 / Published by ArXiv / on (web) Publishing site
- 3 Methods
4 Findings
5 Discussion - DebiasPI: Inference-time Debiasing by Prompt Iteration of a Text-to-Image Generative Model / 2501.18642 / ISBN:https://doi.org/10.48550/arXiv.2501.18642 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Related Work
3 Method
4 Experiments and Results - Constructing AI ethics narratives based on real-world data: Human-AI collaboration in data-driven visual storytelling / 2502.00637 / ISBN:https://doi.org/10.48550/arXiv.2502.00637 / Published by ArXiv / on (web) Publishing site
- 2 Related Work
5 Discussion - Ethical Considerations for the Military Use of Artificial Intelligence in Visual Reconnaissance / 2502.03376 / ISBN:https://doi.org/10.48550/arXiv.2502.03376 / Published by ArXiv / on (web) Publishing site
- 3 Use Case 1 - Decision Support for Maritime Surveillance
- FairT2I: Mitigating Social Bias in Text-to-Image Generation via Large Language Model-Assisted Detection and Attribute Rebalancing / 2502.03826 / ISBN:https://doi.org/10.48550/arXiv.2502.03826 / Published by ArXiv / on (web) Publishing site
- Abstract
1. Introduction
4. Methodologies
5. Experimental Protocol
6. Results
7. Discussions and Conclusions - Cognitive AI framework: advances in the simulation of human thought / 2502.04259 / ISBN:https://doi.org/10.48550/arXiv.2502.04259 / Published by ArXiv / on (web) Publishing site
- 3. Data Flow and Process Logic
- Open Foundation Models in Healthcare: Challenges, Paradoxes, and Opportunities with GenAI Driven Personalized Prescription / 2502.04356 / ISBN:https://doi.org/10.48550/arXiv.2502.04356 / Published by ArXiv / on (web) Publishing site
- III. State-of-the-Art in Open Healthcare LLMs and AIFMs
IV. Leveraging Open LLMs for Prescription: A Case Study - Safety at Scale: A Comprehensive Survey of Large Model Safety / 2502.05206 / ISBN:https://doi.org/10.48550/arXiv.2502.05206 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Vision Foundation Model Safety
3 Large Language Model Safety
4 Vision-Language Pre-Training Model Safety
5 Vision-Language Model Safety
6 Diffusion Model Safety
7 Agent Safety
8 Open Challenges - Fairness in Multi-Agent AI: A Unified Framework for Ethical and Equitable Autonomous Systems / 2502.07254 / ISBN:https://doi.org/10.48550/arXiv.2502.07254 / Published by ArXiv / on (web) Publishing site
- Paper
- DailyDilemmas: Revealing Value Preferences of LLMs with Quandaries of Daily Life / 2410.02683 / ISBN:https://doi.org/10.48550/arXiv.2410.02683 / Published by ArXiv / on (web) Publishing site
- Appendices
- From large language models to multimodal AI: A scoping review on the potential of generative AI in medicine
/ 2502.09242 / ISBN:https://doi.org/10.48550/arXiv.2502.09242 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
5 Multimodal language models in medicine
7 Discussion
Appendices - Relational Norms for Human-AI Cooperation / 2502.12102 / ISBN:https://doi.org/10.48550/arXiv.2502.12102 / Published by ArXiv / on (web) Publishing site
- Abstract
Introduction
Section 1: The Relational Norms Model
Section 2: Distinctive Characteristics of AI and Implications for Relational Norms
Section 3: Considerations and Future Directions for AI Governance and Design
Conclusion - AI and the Transformation of Accountability and Discretion in Urban Governance / 2502.13101 / ISBN:https://doi.org/10.48550/arXiv.2502.13101 / Published by ArXiv / on (web) Publishing site
- 2. Discretion, Accountability, and the Trade-off
- Multi-Agent Risks from Advanced AI / 2502.14143 / ISBN:https://doi.org/10.48550/arXiv.2502.14143 / Published by ArXiv / on (web) Publishing site
- 3 Risk Factors
4 Implications - On the Trustworthiness of Generative Foundation Models: Guideline, Assessment, and Perspective / 2502.14296 / ISBN:https://doi.org/10.48550/arXiv.2502.14296 / Published by ArXiv / on (web) Publishing site
- Abstract
2 Background
4 Designing TrustGen, a Dynamic Benchmark Platform for Evaluating the Trustworthiness of GenFMs
5 Benchmarking Text-to-Image Models
7 Benchmarking Vision-Language Models
9 Trustworthiness in Downstream Applications
10 Further Discussion - Surgical Scene Understanding in the Era of Foundation AI Models: A Comprehensive Review / 2502.14886 / ISBN:https://doi.org/10.48550/arXiv.2502.14886 / Published by ArXiv / on (web) Publishing site
- II. Background and Challenges
III. ML/DL Applications in Surgical Tool Recognition
IV. ML/DL Applications in Surgical Workflow Analysis
V. ML/DL Applications in Surgical Training and Simulation
VI. Open Issues and Future Research Directions in Surgical Scene Understanding - Fair Foundation Models for Medical Image Analysis: Challenges and Perspectives / 2502.16841 / ISBN:https://doi.org/10.48550/arXiv.2502.16841 / Published by ArXiv / on (web) Publishing site
- 2 Background and Taxonomy
- Why do we do this?: Moral Stress and the Affective Experience of Ethics in Practice / 2502.18395 / ISBN:https://doi.org/10.48550/arXiv.2502.18395 / Published by ArXiv / on (web) Publishing site
- 4 Data collection and analysis
6 Discussion - Comprehensive Analysis of Transparency and Accessibility of ChatGPT, DeepSeek, And other SoTA Large Language Models / 2502.18505 / ISBN:https://doi.org/10.48550/arXiv.2502.18505 / Published by ArXiv / on (web) Publishing site
- 2. Methodology
- An LLM-based Delphi Study to Predict GenAI Evolution / 2502.21092 / ISBN:https://doi.org/10.48550/arXiv.2502.21092 / Published by ArXiv / on (web) Publishing site
- 2 Methods
- Digital Doppelgangers: Ethical and Societal Implications of Pre-Mortem AI Clones / 2502.21248 / ISBN:https://doi.org/10.48550/arXiv.2502.21248 / Published by ArXiv / on (web) Publishing site
- 2 Defining Pre-Mortem AI Clones and Generative Ghosts
- Evaluating Large Language Models on the Spanish Medical Intern Resident (MIR) Examination 2024/2025: A Comparative Analysis of Clinical Reasoning and Knowledge Application / 2503.00025 / ISBN:https://doi.org/10.48550/arXiv.2503.00025 / Published by ArXiv / on (web) Publishing site
- Abstract
4. Analysis and Results - Digital Dybbuks and Virtual Golems: AI, Memory, and the Ethics of Holocaust Testimony / 2503.01369 / ISBN:https://doi.org/10.48550/arXiv.2503.01369 / Published by ArXiv / on (web) Publishing site
- Holocaust survivor testimonies: past, present, and possible futures
The permissibility of digital duplicates in Holocaust remembrance and education
Conclusions - Jailbreaking Generative AI: Empowering Novices to Conduct Phishing Attacks / 2503.01395 / ISBN:https://doi.org/10.48550/arXiv.2503.01395 / Published by ArXiv / on (web) Publishing site
- I. Introduction
- Vision Language Models in Medicine / 2503.01863 / ISBN:https://doi.org/10.48550/arXiv.2503.01863 / Published by ArXiv / on (web) Publishing site
- I. Introduction
III. Core Concepts of Visual Language Modeling
IV. VLM Benchmarking and Evaluations
V. Challenges and Limitations
VI. Opportunities and Future Directions - Twenty Years of Personality Computing: Threats, Challenges and Future Directions / 2503.02082 / ISBN:https://doi.org/10.48550/arXiv.2503.02082 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Background, History and Resources
3 Personality Computing Systems
4 Discussion and Conclusion - AI Automatons: AI Systems Intended to Imitate Humans / 2503.02250 / ISBN:https://doi.org/10.48550/arXiv.2503.02250 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Background & Related Work
3 Conceptual Framework for AI Automatons
4 Discussion and Concluding Remarks - Medical Hallucinations in Foundation Models and Their Impact on Healthcare / 2503.05777 / ISBN:https://doi.org/10.48550/arXiv.2503.05777 / Published by ArXiv / on (web) Publishing site
- 2 LLM Hallucinations in Medicine
5 Mitigation Strategies
9 Regulatory and Legal Considerations for AI Hallucinations in Healthcare
Appendices - Generative AI in Transportation Planning: A Survey / 2503.07158 / ISBN:https://doi.org/10.48550/arXiv.2503.07158 / Published by ArXiv / on (web) Publishing site
- 2 Background
- Mapping out AI Functions in Intelligent Disaster (Mis)Management and AI-Caused Disasters / 2502.16644 / ISBN:https://doi.org/10.48550/arXiv.2502.16644 / Published by ArXiv / on (web) Publishing site
- 1. Introduction
2. Intelligent Disaster Management (IDM)
3. Intelligent Disaster Mismanagement (IDMM) - AI Governance InternationaL Evaluation Index (AGILE Index)
/ 2502.15859 / ISBN:https://doi.org/10.48550/arXiv.2502.15859 / Published by ArXiv / on (web) Publishing site
- Executive Summary
2. Overview
3. Analysis and Observations - On the Trustworthiness of Generative Foundation Models: Guideline, Assessment, and Perspective / 2502.14296 / ISBN:https://doi.org/10.48550/arXiv.2502.14296 / Published by ArXiv / on (web) Publishing site
- Appendices
- Detecting Dataset Bias in Medical AI: A Generalized and Modality-Agnostic Auditing Framework / 2503.09969 / ISBN:https://doi.org/10.48550/arXiv.2503.09969 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
Appendices - MinorBench: A hand-built benchmark for content-based risks for children / 2503.10242 / ISBN:https://doi.org/10.48550/arXiv.2503.10242 / Published by ArXiv / on (web) Publishing site
- 2 Literature Review
- LLMs in Disease Diagnosis: A Comparative Study of DeepSeek-R1 and O3 Mini Across Chronic Health Conditions
/ 2503.10486 / ISBN:https://doi.org/10.48550/arXiv.2503.10486 / Published by ArXiv / on (web) Publishing site
- 2 Related Work
- DarkBench: Benchmarking Dark Patterns in Large Language Models / 2503.10728 / ISBN:https://doi.org/10.48550/arXiv.2503.10728 / Published by ArXiv / on (web) Publishing site
- 2 Methodology
Appendices - Synthetic Data for Robust AI Model Development in Regulated Enterprises / 2503.12353 / ISBN:https://doi.org/10.48550/arXiv.2503.12353 / Published by ArXiv / on (web) Publishing site
- Synthetic Data Generation for Enterprise AI Development
- A Peek Behind the Curtain: Using Step-Around Prompt Engineering to Identify Bias and Misinformation in GenAI Models / 2503.15205 / ISBN:https://doi.org/10.48550/arXiv.2503.15205 / Published by ArXiv / on (web) Publishing site
- Literature Review
Prompt Engineering: A Double-Edged Sword
Risks of Malicious Use of Step-Around Prompting
Ethics of Step-Around Prompting - Advancing Human-Machine Teaming: Concepts, Challenges, and Applications
/ 2503.16518 / ISBN:https://doi.org/10.48550/arXiv.2503.16518 / Published by ArXiv / on (web) Publishing site
- 4 Evaluation Methodologies of Human-Machine Teaming Systems (HMTSS)
- Gender and content bias in Large Language Models: a case study on Google Gemini 2.0 Flash Experimental / 2503.16534 / ISBN:https://doi.org/10.48550/arXiv.2503.16534 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Materials and methods
3 Results
4 Discussion - Advancing Problem-Based Learning in Biomedical Engineering in the Era of Generative AI / 2503.16558 / ISBN:https://doi.org/10.48550/arXiv.2503.16558 / Published by ArXiv / on (web) Publishing site
- III. Case Study: PBL for Biomedical AI Education
- Three Kinds of AI Ethics / 2503.18842 / ISBN:https://doi.org/10.48550/arXiv.2503.18842 / Published by ArXiv / on (web) Publishing site
- 1. Introduction
4. Ethics of AI - HH4AI: A methodological Framework for AI Human Rights impact assessment under the EUAI ACT / 2503.18994 / ISBN:https://doi.org/10.48550/arXiv.2503.18994 / Published by ArXiv / on (web) Publishing site
- 3 Standards and Guidelines
4 Proposed Methodology for AI Assessment - Generative AI and News Consumption: Design Fictions and Critical Analysis / 2503.20391 / ISBN:https://doi.org/10.48550/arXiv.2503.20391 / Published by ArXiv / on (web) Publishing site
- 2 Background and Related Work
4 Results
5 Discussion - AI Family Integration Index (AFII): Benchmarking a New Global Readiness for AI as Family / 2503.22772 / ISBN:https://doi.org/10.48550/arXiv.2503.22772 / Published by ArXiv / on (web) Publishing site
- 4. AI-Family Integration (AFI) – Benchmarking with AI Policy Penetration and Traditional AI Index
- BEATS: Bias Evaluation and Assessment Test Suite for Large Language Models
/ 2503.24310 / ISBN:https://doi.org/10.48550/arXiv.2503.24310 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Proposed Framework - BEATS
3 Key Findings
7 Appendix - Bridging the Gap: Integrating Ethics and Environmental Sustainability in AI Research and Practice / 2504.00797 / ISBN:https://doi.org/10.48550/arXiv.2504.00797 / Published by ArXiv / on (web) Publishing site
- 2 Key Concepts and Definitions
3 Existing Scholarship in AI Ethics and Sustainability
4 Transversal Issues in AI Ethics and Sustainability - Who is Responsible When AI Fails? Mapping Causes, Entities, and Consequences of AI Privacy and Ethical Incidents
/ 2504.01029 / ISBN:https://doi.org/10.48550/arXiv.2504.01029 / Published by ArXiv / on (web) Publishing site
- Abstract
1. Introduction
2. Related Work
3. Methodology
4. Taxonomy of AI Privacy and Ethical Incidents
5. Discussion
6. Conclusion
Appendices - Who Owns the Output? Bridging Law and Technology in LLMs Attribution / 2504.01032 / ISBN:https://doi.org/10.48550/arXiv.2504.01032 / Published by ArXiv / on (web) Publishing site
- 2 Legal Background
- Systematic Literature Review: Explainable AI Definitions and Challenges in Education / 2504.02910 / ISBN:https://doi.org/10.48550/arXiv.2504.02910 / Published by ArXiv / on (web) Publishing site
- Methodology
- Ethical AI on the Waitlist: Group Fairness Evaluation of LLM-Aided Organ Allocation / 2504.03716 / ISBN:https://doi.org/10.48550/arXiv.2504.03716 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
3 Results
Appendix - Assessing employment and labour issues implicated by using AI
/ 2504.06322 / ISBN:https://doi.org/10.48550/arXiv.2504.06322 / Published by ArXiv / on (web) Publishing site
- 2. Approach 1: Back to the thick of it
3. Approach 2: Towards a Relational Understanding of Tasks
4. Discussion: Assessing Impact Using Approach 1 and 2 - We Are All Creators: Generative AI, Collective Knowledge, and the Path Towards Human-AI Synergy / 2504.07936 / ISBN:https://doi.org/10.48550/arXiv.2504.07936 / Published by ArXiv / on (web) Publishing site
- 2 The Connectionist Nature of Generative AI: Beyond the Black Box
- ValueCompass: A Framework for Measuring Contextual Value Alignment Between Human and LLMs / 2409.09586 / ISBN:https://doi.org/10.48550/arXiv.2409.09586 / Published by ArXiv / on (web) Publishing site
- Appendices
- Towards interactive evaluations for interaction harms in human-AI systems / 2405.10632 / ISBN:https://doi.org/10.48550/arXiv.2405.10632 / Published by ArXiv / on (web) Publishing site
- 2 An overview of the generative AI evaluation landscape
- AI-Driven Healthcare: A Review on Ensuring Fairness and Mitigating Bias / 2407.19655 / ISBN:https://doi.org/10.48550/arXiv.2407.19655 / Published by ArXiv / on (web) Publishing site
- 1 Applications of AI in Healthcare
2 Fairness Concerns in Healthcare - Transforming Science with Large Language Models: A Survey on AI-assisted Scientific Discovery, Experimentation, Content Generation, and Evaluation
/ 2502.05151 / ISBN:https://doi.org/10.48550/arXiv.2502.05151 / Published by ArXiv / on (web) Publishing site
- 3 AI Support for Individual Topics and Tasks
4 Ethical Concerns
Appendix - Who is Responsible? The Data, Models, Users or Regulations? A Comprehensive Survey on Responsible Generative AI for a Sustainable Future / 2502.08650 / ISBN:https://doi.org/10.48550/arXiv.2502.08650 / Published by ArXiv / on (web) Publishing site
- 2 Responsible Generative AI
5 Responsible AI Applications Across Domains
- What is Being Evaluated?
Who Evaluates and How?
Other Considerations
Case Studies - Confirmation Bias in Generative AI Chatbots: Mechanisms, Risks, Mitigation Strategies, and Future Research Directions / 2504.09343 / ISBN:https://doi.org/10.48550/arXiv.2504.09343 / Published by ArXiv / on (web) Publishing site
- 7. Future Research Directions
- Designing AI-Enabled Countermeasures to Cognitive Warfare / 2504.11486 / ISBN:https://doi.org/10.48550/arXiv.2504.11486 / Published by ArXiv / on (web) Publishing site
- 2.0 Cognitive Warfare in Practice
- Framework, Standards, Applications and Best practices of Responsible AI : A Comprehensive Survey / 2504.13979 / ISBN:https://doi.org/10.48550/arXiv.2504.13979 / Published by ArXiv / on (web) Publishing site
- 2. Trustworthy AI Framework
9. Challenges and Best practices of RAI - Values in the Wild: Discovering and Analyzing Values in Real-World Language Model Interactions
/ 2504.15236 / ISBN:https://doi.org/10.48550/arXiv.2504.15236 / Published by ArXiv / on (web) Publishing site
- 3 Results
Appendix - Towards responsible AI for education: Hybrid human-AI to confront the Elephant in the room / 2504.16148 / ISBN:https://doi.org/10.48550/arXiv.2504.16148 / Published by ArXiv / on (web) Publishing site
- 2. The paradigm shifts in AI for education: From expert systems to general intelligence
3. Challenges of current AI methods in education: The Elephant in the room
4. Hybrid human-AI methods for responsible AI for education
5. Conclusion
References - Approaches to Responsible Governance of GenAI in Organizations / 2504.17044 / ISBN:https://doi.org/10.48550/arXiv.2504.17044 / Published by ArXiv / on (web) Publishing site
- I. Introduction
III. Identified Concerns and Risks
IV. Solutions to Address Concerns - Auditing the Ethical Logic of Generative AI Models / 2504.17544 / ISBN:https://doi.org/10.48550/arXiv.2504.17544 / Published by ArXiv / on (web) Publishing site
- Seven Contemporary LLMs
- AI Ethics and Social Norms: Exploring ChatGPT's Capabilities From What to How / 2504.18044 / ISBN:https://doi.org/10.48550/arXiv.2504.18044 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
5 Discussion - A Design Framework for operationalizing Trustworthy Artificial Intelligence in Healthcare: Requirements, Tradeoffs and Challenges for its Clinical Adoption / 2504.19179 / ISBN:https://doi.org/10.48550/arXiv.2504.19179 / Published by ArXiv / on (web) Publishing site
- 2. Fundamentals of Trustworthy AI
3. An overview of the AI ecosystem in the medical field: processes, data, and stakeholders
4. Design framework for medical AI systems
6. Challenges towards the practical adoption of the design framework in healthcare
7. Conclusions and outlook - The Convergent Ethics of AI? Analyzing Moral Foundation Priorities in Large Language Models with a Multi-Framework Approach / 2504.19255 / ISBN:https://doi.org/10.48550/arXiv.2504.19255 / Published by ArXiv / on (web) Publishing site
- Problem Statement
Findings - AI Awareness / 2504.20084 / ISBN:https://doi.org/10.48550/arXiv.2504.20084 / Published by ArXiv / on (web) Publishing site
- 2 Theoretical Foundations of AI Awareness
3 Evaluating AI Awareness in LLMs
4 AI Awareness and AI Capabilities
5 Risks and Challenges of AI Awareness
6 Conclusion - Federated learning, ethics, and the double black box problem in medical AI / 2504.20656 / ISBN:https://doi.org/10.48550/arXiv.2504.20656 / Published by ArXiv / on (web) Publishing site
- 2 What is federated learning?
- From Texts to Shields: Convergence of Large Language Models and Cybersecurity / 2505.00841 / ISBN:https://doi.org/10.48550/arXiv.2505.00841 / Published by ArXiv / on (web) Publishing site
- 3 LLM Agent and Applications
5 LLM Interpretability, Safety, and Security - LLM Ethics Benchmark: A Three-Dimensional Assessment System for Evaluating Moral Reasoning in Large Language Models / 2505.00853 / ISBN:https://doi.org/10.48550/arXiv.2505.00853 / Published by ArXiv / on (web) Publishing site
- 2 Related Work
5 Experimental Results - Emotions in the Loop: A Survey of Affective Computing for Emotional Support / 2505.01542 / ISBN:https://doi.org/10.48550/arXiv.2505.01542 / Published by ArXiv / on (web) Publishing site
- IV. Applications and Approaches
VI. Datasets for Emotion Management and Sentiment Analysis
IX. Ethical and Societal Considerations - Towards Safer Pretraining: Analyzing and Filtering Harmful Content in Webscale datasets for Responsible LLMs / 2505.02009 / ISBN:https://doi.org/10.48550/arXiv.2505.02009 / Published by ArXiv / on (web) Publishing site
- 3 Three-Dimensional Safety Taxonomy for LLM Risk Mitigation
6 Results - Ethical AI in the Healthcare Sector: Investigating Key Drivers of Adoption through the Multi-Dimensional Ethical AI Adoption Model (MEAAM) / 2505.02062 / ISBN:https://doi.org/10.9734/ajmah/2025/v23i51228 / Published by ArXiv / on (web) Publishing site
- 1. Introduction
2. Literature Review and Theoretical Mechanism - GenAI in Entrepreneurship: a systematic review of generative artificial intelligence in entrepreneurship research: current issues and future directions / 2505.05523 / ISBN:https://doi.org/10.48550/arXiv.2505.05523 / Published by ArXiv / on (web) Publishing site
- 4. Findings
5. Ethics, Opportunities and Future Directions - Examining the Expanding Role of Synthetic Data Throughout the AI Development Pipeline / 2501.18493 / ISBN:https://doi.org/10.48550/arXiv.2501.18493 / Published by ArXiv / on (web) Publishing site
- Appendices
- AI and Generative AI Transforming Disaster Management: A Survey of Damage Assessment and Response Techniques / 2505.08202 / ISBN:https://doi.org/10.48550/arXiv.2505.08202 / Published by ArXiv / on (web) Publishing site
- Abstract
II Domain-Specific Literature Review
III Data Modalities and Generative AI Applications
V Privacy, Security and Deployment Challenges
VI Future Prospects and Conclusion - Ethics and Persuasion in Reinforcement Learning from Human Feedback: A Procedural Rhetorical Approach / 2505.09576 / ISBN:https://doi.org/10.48550/arXiv.2505.09576 / Published by ArXiv / on (web) Publishing site
- IV Persuasive Procedures in Generative AI
- WorldView-Bench: A Benchmark for Evaluating Global Cultural Perspectives in Large Language Models / 2505.09595 / ISBN:https://doi.org/10.48550/arXiv.2505.09595 / Published by ArXiv / on (web) Publishing site
- 2 Background
3 Methodology and System Design
4 Benchmarking and Intervention Strategies
5 Results
6 Discussion and Potential Limitations - Analysing Safety Risks in LLMs Fine-Tuned with Pseudo-Malicious Cyber Security Data / 2505.09974 / ISBN:https://doi.org/10.48550/arXiv.2505.09974 / Published by ArXiv / on (web) Publishing site
- 3 Methodology
5 Further Discussions - AI LEGO: Scaffolding Cross-Functional Collaboration in Industrial Responsible AI Practices during Early Design Stages / 2505.10300 / ISBN:https://doi.org/10.48550/arXiv.2505.10300 / Published by ArXiv / on (web) Publishing site
- Appendices
- Formalising Human-in-the-Loop: Computational Reductions, Failure Modes, and Legal-Moral Responsibility / 2505.10426 / ISBN:https://doi.org/10.48550/arXiv.2505.10426 / Published by ArXiv / on (web) Publishing site
- Introduction
Computational Reductions for HITL
HITL Failure Modes
Legal-Moral Responsibility - Let's have a chat with the EU AI Act / 2505.11946 / ISBN:https://doi.org/10.48550/arXiv.2505.11946 / Published by ArXiv / on (web) Publishing site
- III System Design and Architecture
- Sentience Quest: Towards Embodied, Emotionally Adaptive, Self-Evolving, Ethically Aligned Artificial General Intelligence / 2505.12229 / ISBN:https://doi.org/10.48550/arXiv.2505.12229 / Published by ArXiv / on (web) Publishing site
- 4 Preliminary Results and Evaluation
- From Automation to Autonomy: A Survey on Large Language Models in Scientific Discovery / 2505.13259 / ISBN:https://doi.org/10.48550/arXiv.2505.13259 / Published by ArXiv / on (web) Publishing site
- 5 Level 3. LLM as Scientist (Table A3)
- Kaleidoscope Gallery: Exploring Ethics and Generative AI Through Art / 2505.14758 / ISBN:https://doi.org/10.48550/arXiv.2505.14758 / Published by ArXiv / on (web) Publishing site
- 4 Results
- AI vs. Human Judgment of Content Moderation: LLM-as-a-Judge and Ethics-Based Response Refusals / 2505.15365 / ISBN:https://doi.org/10.48550/arXiv.2505.15365 / Published by ArXiv / on (web) Publishing site
- Abstract
3 Methodology
4 Results - Exploring Moral Exercises for Human Oversight of AI systems: Insights from Three Pilot Studies / 2505.15851 / ISBN:https://doi.org/10.48550/arXiv.2505.15851 / Published by ArXiv / on (web) Publishing site
- 3 Pilot Studies
- Advancing the Scientific Method with Large Language Models: From Hypothesis to Discovery / 2505.16477 / ISBN:https://doi.org/10.48550/arXiv.2505.16477 / Published by ArXiv / on (web) Publishing site
- Current use of LLMs – From Specialised Scientific Copilots to LLM-assisted Scientific Discoveries
Toward Large Language Models as Creative Engines for Fundamental Science
Challenges and Opportunities - Cultural Value Alignment in Large Language Models: A Prompt-based Analysis of Schwartz Values in Gemini, ChatGPT, and DeepSeek / 2505.17112 / ISBN:https://doi.org/10.48550/arXiv.2505.17112 / Published by ArXiv / on (web) Publishing site
- Introduction
- A Toolkit for Compliance, a Toolkit for Justice: Drawing on Cross-sectoral Expertise to Develop a Pro-justice EU AI Act Toolkit / 2505.17165 / ISBN:https://doi.org/10.48550/arXiv.2505.17165 / Published by ArXiv / on (web) Publishing site
- 2 Background and Rationale
3 Process
5 Reflections, Insights, and Recommendations - SweEval: Do LLMs Really Swear? A Safety Benchmark for Testing Limits for Enterprise Use / 2505.17332 / ISBN:https://doi.org/10.48550/arXiv.2505.17332 / Published by ArXiv / on (web) Publishing site
- 2 Related Work
- TEDI: Trustworthy and Ethical Dataset Indicators to Analyze and Compare Dataset Documentation / 2505.17841 / ISBN:https://doi.org/10.48550/arXiv.2505.17841 / Published by ArXiv / on (web) Publishing site
- 3 Analysis of Multimodal Datasets
4 Impact of Data Sourcing on Trustworthy and Ethical Indicators
Appendix - AI Literacy for Legal AI Systems: A practical approach / 2505.18006 / ISBN:https://doi.org/10.48550/arXiv.2505.18006 / Published by ArXiv / on (web) Publishing site
- 2. Legal AI systems: A definition
- Opacity as a Feature, Not a Flaw: The LoBOX Governance Ethic for Role-Sensitive Explainability and Institutional Trust in AI / 2505.20304 / ISBN:https://doi.org/10.48550/arXiv.2505.20304 / Published by ArXiv / on (web) Publishing site
- 3 Operationalizing Ethical Governance: The Three-Stage LoBOX Framework Pathway for Managing Opacity
- Making Sense of the Unsensible: Reflection, Survey, and Challenges for XAI in Large Language Models Toward Human-Centered AI / 2505.20305 / ISBN:https://doi.org/10.48550/arXiv.2505.20305 / Published by ArXiv / on (web) Publishing site
- 7 Designing Actionable and Governable XAI: Challenges and Research Frontiers
- Can we Debias Social Stereotypes in AI-Generated Images? Examining Text-to-Image Outputs and User Perceptions / 2505.20692 / ISBN:https://doi.org/10.48550/arXiv.2505.20692 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Background and Related Work
3 Study Design and Data
4 Methods
5 Results
6 Discussion
7 Conclusion
Appendix - Simulating Ethics: Using LLM Debate Panels to Model Deliberation on Medical Dilemmas / 2505.21112 / ISBN:https://doi.org/10.48550/arXiv.2505.21112 / Published by ArXiv / on (web) Publishing site
- 3 Methodology
- Human-Centered Human-AI Collaboration (HCHAC) / 2505.22477 / ISBN:https://doi.org/10.48550/arXiv.2505.22477 / Published by ArXiv / on (web) Publishing site
- 2. An Overview of Human-AI Collaboration
4. Current Research Agenda of HAC
5. Human-Centered Human-AI Collaboration - Toward Effective AI Governance: A Review of Principles / 2505.23417 / ISBN:https://doi.org/10.48550/arXiv.2505.23417 / Published by ArXiv / on (web) Publishing site
- III Results
- SafeScientist: Toward Risk-Aware Scientific Discoveries by LLM Agents / 2505.23559 / ISBN:https://doi.org/10.48550/arXiv.2505.23559 / Published by ArXiv / on (web) Publishing site
- 3 Method
4 SciSafetyBench
5 Experiment
Appendix - Exposing the Impact of GenAI for Cybercrime: An Investigation into the Dark Side / 2505.23733 / ISBN:https://doi.org/10.48550/arXiv.2505.23733 / Published by ArXiv / on (web) Publishing site
- 6 Discussion
- Exploring Societal Concerns and Perceptions of AI: A Thematic Analysis through the Lens of Problem-Seeking / 2505.23930 / ISBN:https://doi.org/10.48550/arXiv.2505.23930 / Published by ArXiv / on (web) Publishing site
- Results
- Locating Risk: Task Designers and the Challenge of Risk Disclosure in RAI Content Work / 2505.24246 / ISBN:https://doi.org/10.48550/arXiv.2505.24246 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
3 Methods
4 Findings
5 Discussion - Bottom-Up Perspectives on AI Governance: Insights from User Reviews of AI Products / 2506.00080 / ISBN:https://doi.org/10.48550/arXiv.2506.00080 / Published by ArXiv / on (web) Publishing site
- 4. Results
- Feeling Guilty Being a c(ai)borg: Navigating the Tensions Between Guilt and Empowerment in AI Use / 2506.00094 / ISBN:https://doi.org/10.48550/arXiv.2506.00094 / Published by ArXiv / on (web) Publishing site
- 2. State of the Art
- DeepSeek in Healthcare: A Survey of Capabilities, Risks, and Clinical Applications of Open-Source Large Language Models / 2506.01257 / ISBN:https://doi.org/10.48550/arXiv.2506.01257 / Published by ArXiv / on (web) Publishing site
- Future Directions
- Machine vs Machine: Using AI to Tackle Generative AI Threats in Assessment / 2506.02046 / ISBN:https://doi.org/10.48550/arXiv.2506.02046 / Published by ArXiv / on (web) Publishing site
- 4. Theoretical Framework for Vulnerability Scoring
- HADA: Human-AI Agent Decision Alignment Architecture / 2506.04253 / ISBN:https://doi.org/10.48550/arXiv.2506.04253 / Published by ArXiv / on (web) Publishing site
- 2 Related Work: Emerging LLM Software Agents
- Explainability in Context: A Multilevel Framework Aligning AI Explanations with Stakeholder with LLMs / 2506.05887 / ISBN:https://doi.org/10.48550/arXiv.2506.05887 / Published by ArXiv / on (web) Publishing site
- 2 Background on Explainable AI and Audience-Centered Explanations
3 A Multilevel Framework for Audience-Aware Explainability
5 Conclusions - On the Ethics of Using LLMs for Offensive Security / 2506.08693 / ISBN:https://doi.org/10.48550/arXiv.2506.08693 / Published by ArXiv / on (web) Publishing site
- Abstract
3 Methodology
4 Results
5 Discussion - Where's the Line? A Classroom Activity on Ethical and Constructive Use of Generative AI in Physics / 2506.00229 / ISBN:https://doi.org/10.48550/arXiv.2506.00229 / Published by ArXiv / on (web) Publishing site
- Themes from Student Reflections