RobertoLofaro.com - Knowledge Portal - human-generated content
Change, with and without technology

You are now here: AI Ethics Primer - search within the bibliography - version 0.4 of 2023-12-13 > (tag cloud) > tag_selected: trained


Currently searching for:

If you need more than one keyword, separate the keywords with an underscore (_).
The search key can be up to 50 characters long.


If you modify the keywords, press Enter within the field to confirm the new search key.
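
A minimal sketch of how such an underscore-separated search key could be split into individual keywords, assuming a hypothetical helper (parse_search_key) and the 50-character limit stated above; this is an illustration only, not the portal's actual code.

    # Minimal illustrative sketch (not the portal's actual implementation) of
    # handling an underscore-separated search key, as described above.
    MAX_KEY_LENGTH = 50  # assumption drawn from the stated 50-character limit

    def parse_search_key(raw_key: str) -> list[str]:
        """Split an underscore-separated search key into individual keywords."""
        key = raw_key.strip()[:MAX_KEY_LENGTH]      # enforce the length limit
        return [kw for kw in key.split("_") if kw]  # drop empty fragments

    # Example: a two-keyword search key
    print(parse_search_key("trained_bias"))  # -> ['trained', 'bias']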

Tag: trained

Bibliography items where the tag occurs: 475
The AI Index 2022 Annual Report / 2205.03468 / ISBN:https://doi.org/10.48550/arXiv.2205.03468 / Published by ArXiv / on (web) Publishing site
Report highlights
Chapter 2 Technical Performance
Chapter 3 Technical AI Ethics
Appendix


AI Ethics Issues in Real World: Evidence from AI Incident Database / 2206.07635 / ISBN:https://doi.org/10.48550/arXiv.2206.07635 / Published by ArXiv / on (web) Publishing site
4 Results


The Different Faces of AI Ethics Across the World: A Principle-Implementation Gap Analysis / 2206.03225 / ISBN:https://doi.org/10.48550/arXiv.2206.03225 / Published by ArXiv / on (web) Publishing site
5 Evaluation of Ethical Principle Implementations


A Framework for Ethical AI at the United Nations / 2104.12547 / ISBN:https://doi.org/10.48550/arXiv.2104.12547 / Published by ArXiv / on (web) Publishing site
Introduction
1. Problems with AI
2. Defining ethical AI


ESR: Ethics and Society Review of Artificial Intelligence Research / 2106.11521 / ISBN:https://doi.org/10.48550/arXiv.2106.11521 / Published by ArXiv / on (web) Publishing site
4 Deployment and Evaluation
A Appendix: Interview Protocol


What does it mean to be a responsible AI practitioner: An ontology of roles and skills / 2205.03946 / ISBN:https://doi.org/10.48550/arXiv.2205.03946 / Published by ArXiv / on (web) Publishing site
2 Background
3 Methodology
4 Proposed competency framework for responsible AI practitioners


GPT detectors are biased against non-native English writers / 2304.02819 / ISBN:https://doi.org/10.48550/arXiv.2304.02819 / Published by ArXiv / on (web) Publishing site
Abstract


Implementing Responsible AI: Tensions and Trade-Offs Between Ethics Aspects / 2304.08275 / ISBN:https://doi.org/10.48550/arXiv.2304.08275 / Published by ArXiv / on (web) Publishing site
II. Underlying Aspects
III. Interactions between Aspects


A multilevel framework for AI governance / 2307.03198 / ISBN:https://doi.org/10.48550/arXiv.2307.03198 / Published by ArXiv / on (web) Publishing site
4. Corporate Self-Governance
8. Ethics and Trust Lenses in the Multilevel Framework


From OECD to India: Exploring cross-cultural differences in perceived trust, responsibility and reliance of AI and human experts / 2307.15452 / ISBN:https://doi.org/10.48550/arXiv.2307.15452 / Published by ArXiv / on (web) Publishing site
2. Method


The Ethics of AI Value Chains / 2307.16787 / ISBN:https://doi.org/10.48550/arXiv.2307.16787 / Published by ArXiv / on (web) Publishing site
4. Ethical Implications of AI Value Chains


Regulating AI manipulation: Applying Insights from behavioral economics and psychology to enhance the practicality of the EU AI Act / 2308.02041 / ISBN:https://doi.org/10.48550/arXiv.2308.02041 / Published by ArXiv / on (web) Publishing site
3 Enhancing Protection for the General Public and Vulnerable Groups


From Military to Healthcare: Adopting and Expanding Ethical Principles for Generative Artificial Intelligence / 2308.02448 / ISBN:https://doi.org/10.48550/arXiv.2308.02448 / Published by ArXiv / on (web) Publishing site
What is Generative Artificial Intelligence?


Ethical Considerations and Policy Implications for Large Language Models: Guiding Responsible Development and Deployment / 2308.02678 / ISBN:https://doi.org/10.48550/arXiv.2308.02678 / Published by ArXiv / on (web) Publishing site
Bias and Discrimination of Training Data


Dual Governance: The intersection of centralized regulation and crowdsourced safety mechanisms for Generative AI / 2308.04448 / ISBN:https://doi.org/10.48550/arXiv.2308.04448 / Published by ArXiv / on (web) Publishing site
1 Introduction
2 Background
5 Crowdsourced safety mechanism


Normative Ethics Principles for Responsible AI Systems: Taxonomy and Future Directions / 2208.12616 / ISBN:https://doi.org/10.48550/arXiv.2208.12616 / Published by ArXiv / on (web) Publishing site
3 Taxonomy of ethical principles


Bad, mad, and cooked: Moral responsibility for civilian harms in human-AI military teams / 2211.06326 / ISBN:https://doi.org/10.48550/arXiv.2211.06326 / Published by ArXiv / on (web) Publishing site
Introduction
Computers, Autonomy and Accountability
AI Workplace Health and Safety Framework


A Survey of Safety and Trustworthiness of Large Language Models through the Lens of Verification and Validation / 2305.11391 / ISBN:https://doi.org/10.48550/arXiv.2305.11391 / Published by ArXiv / on (web) Publishing site
1 Introduction
2 Large Language Models
3 Vulnerabilities, Attack, and Limitations
5 Falsification and Evaluation
6 Verification
7 Runtime Monitor
8 Regulations and Ethical Use


Getting pwn'd by AI: Penetration Testing with Large Language Models / 2308.00121 / ISBN:https://doi.org/10.48550/arXiv.2308.00121 / Published by ArXiv / on (web) Publishing site
2 Background
3 LLM-based penetration testing
4 Discussion


Targeted Data Augmentation for bias mitigation / 2308.11386 / ISBN:https://doi.org/10.48550/arXiv.2308.11386 / Published by ArXiv / on (web) Publishing site
1 Introduction
2 Related works
3 Targeted data augmentation
4 Experiments
5 Conclusions


Building Trust in Conversational AI: A Comprehensive Review and Solution Architecture for Explainable, Privacy-Aware Systems using LLMs and Knowledge Graph / 2308.13534 / ISBN:https://doi.org/10.48550/arXiv.2308.13534 / Published by ArXiv / on (web) Publishing site
I. Introduction
III. Comprehensive review of state-of-the-art LLMs
VI. Solution architecture for privacy-aware and trustworthy conversational AI


The Promise and Peril of Artificial Intelligence -- Violet Teaming Offers a Balanced Path Forward / 2308.14253 / ISBN:https://doi.org/10.48550/arXiv.2308.14253 / Published by ArXiv / on (web) Publishing site
4 Integrating red teaming, blue teaming, and ethics with violet teaming
6 A pathway for balanced AI innovation
7 Violet teaming to address dual-use risks of AI in biotechnology


Rethinking Machine Ethics -- Can LLMs Perform Moral Reasoning through the Lens of Moral Theories? / 2308.15399 / ISBN:https://doi.org/10.48550/arXiv.2308.15399 / Published by ArXiv / on (web) Publishing site
4 Experiment
Limitations


The AI Revolution: Opportunities and Challenges for the Finance Sector / 2308.16538 / ISBN:https://doi.org/10.48550/arXiv.2308.16538 / Published by ArXiv / on (web) Publishing site
2 Key AI technology in financial services
3 Benefits of AI use in the finance sector
4 Threats & potential pitfalls
5 Challenges
6 Regulation of AI and regulating through AI


Ethical Framework for Harnessing the Power of AI in Healthcare and Beyond / 2309.00064 / ISBN:https://doi.org/10.48550/arXiv.2309.00064 / Published by ArXiv / on (web) Publishing site
2 Black box and lack of transparency
3 Bias and fairness
4 Human-centric AI


The Impact of Artificial Intelligence on the Evolution of Digital Education: A Comparative Study of OpenAI Text Generation Tools including ChatGPT, Bing Chat, Bard, and Ernie / 2309.02029 / ISBN:https://doi.org/10.48550/arXiv.2309.02029 / Published by ArXiv / on (web) Publishing site
2. Related work
3. ChatGPT Training Process
4. Methods
5. Discussion
6. Conclusion


Pathway to Future Symbiotic Creativity / 2209.02388 / ISBN:https://doi.org/10.48550/arXiv.2209.02388 / Published by ArXiv / on (web) Publishing site
Introduction
Part 1 - 1 Generative Systems: Mimicking Artifacts
Part 2 Art Data and Human–Machine Interaction in Art Creation
Part 2 - 3 Photogrammetry / Volumetric Capture
Part 3 - 2 Machine Artist Models
Part 3 - 3 Comparison with Generative Models
Part 3 - 4 Demonstration of the Proposed Framework
Part 5 - 1 Authorship and Ownership of AI-generated Works of Art
Part 5 - 2 Algorithmic Bias in Art Generation


FUTURE-AI: Guiding Principles and Consensus Recommendations for Trustworthy Artificial Intelligence in Medical Imaging / 2109.09658 / ISBN:https://doi.org/10.48550/arXiv.2109.09658 / Published by ArXiv / on (web) Publishing site
1. Introduction
2. Fairness - For Equitable AI in Medical Imaging
4. Traceability - For Transparent and Dynamic AI in Medical Imaging
6. Robustness - For Reliable AI in Medical Imaging
7. Explainability - For Enhanced Understanding of AI in Medical Imaging


The Cambridge Law Corpus: A Corpus for Legal AI Research / 2309.12269 / ISBN:https://doi.org/10.48550/arXiv.2309.12269 / Published by ArXiv / on (web) Publishing site
Abstract
4 Experiments
General References
C Case Outcome Task Description


EALM: Introducing Multidimensional Ethical Alignment in Conversational Information Retrieval / 2310.00970 / ISBN:https://doi.org/10.48550/arXiv.2310.00970 / Published by ArXiv / on (web) Publishing site
1 Introduction
5 Experiments


Security Considerations in AI-Robotics: A Survey of Current Methods, Challenges, and Opportunities / 2310.08565 / ISBN:https://doi.org/10.48550/arXiv.2310.08565 / Published by ArXiv / on (web) Publishing site
IV. Attack Surfaces


Deepfakes, Phrenology, Surveillance, and More! A Taxonomy of AI Privacy Risks / 2310.07879 / ISBN:https://doi.org/10.48550/arXiv.2310.07879 / Published by ArXiv / on (web) Publishing site
1 Introduction
3 Method
4 Taxonomy of AI Privacy Risks


ClausewitzGPT Framework: A New Frontier in Theoretical Large Language Model Enhanced Information Operations / 2310.07099 / ISBN:https://doi.org/10.48550/arXiv.2310.07099 / Published by ArXiv / on (web) Publishing site
Theoretical Impact of LLMs on Information Operations
ClausewitzGPT and Modern Strategy


A Survey of Large Language Models for Healthcare: from Data, Technology, and Applications to Accountability and Ethics / 2310.05694 / ISBN:https://doi.org/10.48550/arXiv.2310.05694 / Published by ArXiv / on (web) Publishing site
Abstract
1. Introduction
2. What LLMs can do for healthcare? from fundamental tasks to advanced applications
3. From PLMs to LLMs for healthcare
4. Usage and data for healthcare LLM
5. Improving fairness, accountability, transparency, and ethics


Regulation and NLP (RegNLP): Taming Large Language Models / 2310.05553 / ISBN:https://doi.org/10.48550/arXiv.2310.05553 / Published by ArXiv / on (web) Publishing site
3 LLMs: Risk and Uncertainty


Ethics of Artificial Intelligence and Robotics in the Architecture, Engineering, and Construction Industry / 2310.05414 / ISBN:https://doi.org/10.48550/arXiv.2310.05414 / Published by ArXiv / on (web) Publishing site
5. Ethical Issues of AI and Robotics in AEC Industry
7. Future Research Direction


Risk of AI in Healthcare: A Comprehensive Literature Review and Study Framework / 2309.14530 / ISBN:https://doi.org/10.48550/arXiv.2309.14530 / Published by ArXiv / on (web) Publishing site
4. Technical Risks


Autonomous Vehicles an overview on system, cyber security, risks, issues, and a way forward / 2309.14213 / ISBN:https://doi.org/10.48550/arXiv.2309.14213 / Published by ArXiv / on (web) Publishing site
7. Issues


The Return on Investment in AI Ethics: A Holistic Framework / 2309.13057 / ISBN:https://doi.org/10.48550/arXiv.2309.13057 / Published by ArXiv / on (web) Publishing site
4. A Holistic Framework


An Evaluation of GPT-4 on the ETHICS Dataset / 2309.10492 / ISBN:https://doi.org/10.48550/arXiv.2309.10492 / Published by ArXiv / on (web) Publishing site
2 Datasets and Methods
3 Results


AI & Blockchain as sustainable teaching and learning tools to cope with the 4IR / 2305.01088 / ISBN:https://doi.org/10.48550/arXiv.2305.01088 / Published by ArXiv / on (web) Publishing site
3. AI-powered personalized learning: Customized learning experiences for learners
5. AI-powered assessment and evaluation


Toward an Ethics of AI Belief / 2304.14577 / ISBN:https://doi.org/10.48550/arXiv.2304.14577 / Published by ArXiv / on (web) Publishing site
3. Proposed Novel Topics in an Ethics of AI Belief
4. Nascent Extant Work that Falls Within the Ethics of AI Belief


A Conceptual Algorithm for Applying Ethical Principles of AI to Medical Practice / 2304.11530 / ISBN:https://doi.org/10.48550/arXiv.2304.11530 / Published by ArXiv / on (web) Publishing site
3 Ethical datasets and algorithm development guidelines
5 Ethical guidelines for medical AI model deployment


Responsible AI Pattern Catalogue: A Collection of Best Practices for AI Governance and Engineering / 2209.04963 / ISBN:https://doi.org/10.48550/arXiv.2209.04963 / Published by ArXiv / on (web) Publishing site
4 Process Patterns


FUTURE-AI: International consensus guideline for trustworthy and deployable artificial intelligence in healthcare / 2309.12325 / ISBN:https://doi.org/10.48550/arXiv.2309.12325 / Published by ArXiv / on (web) Publishing site
FUTURE-AI GUIDELINE


Language Agents for Detecting Implicit Stereotypes in Text-to-Image Models at Scale / 2310.11778 / ISBN:https://doi.org/10.48550/arXiv.2310.11778 / Published by ArXiv / on (web) Publishing site
Appendix A Data Details


Specific versus General Principles for Constitutional AI / 2310.13798 / ISBN:https://doi.org/10.48550/arXiv.2310.13798 / Published by ArXiv / on (web) Publishing site
1 Introduction
2 AI feedback on specific problematic AI traits
3 Generalization from a Simple Good for Humanity Principle
4 Reinforcement Learning with Good-for-Humanity Preference Models
5 Related Work
6 Discussion
A Model Glossary
B Trait Preference Modeling
C General Prompts for GfH Preference Modeling
D Generalization to Other Traits
E Response Diversity and the Size of the Generating Model
F Scaling Trends for GfH PMs
G Over-Training on Good for Humanity
H Samples
I Responses on Prompts from PALMS, LaMDA, and InstructGPT


The Self 2.0: How AI-Enhanced Self-Clones Transform Self-Perception and Improve Presentation Skills / 2310.15112 / ISBN:https://doi.org/10.48550/arXiv.2310.15112 / Published by ArXiv / on (web) Publishing site
4 Findings


Systematic AI Approach for AGI: Addressing Alignment, Energy, and AGI Grand Challenges / 2310.15274 / ISBN:https://doi.org/10.48550/arXiv.2310.15274 / Published by ArXiv / on (web) Publishing site
1 Introduction
3 Systematic AI Approach for AGI with Self-Learning Architecture
7 Pursuit of the “Universal AGI Architecture” versus “Systematic AGI”


AI Alignment and Social Choice: Fundamental Limitations and Policy Implications / 2310.16048 / ISBN:https://doi.org/10.48550/arXiv.2310.16048 / Published by ArXiv / on (web) Publishing site
4 Implications for AI Governance and Policy


Unpacking the Ethical Value Alignment in Big Models / 2310.17551 / ISBN:https://doi.org/10.48550/arXiv.2310.17551 / Published by ArXiv / on (web) Publishing site
1 Introduction
2 Risks and Ethical Issues of Big Model
3 Investigating the Ethical Values of Large Language Models
4 Equilibrium Alignment: A Prospective Paradigm for Ethical Value Alignment


Moral Responsibility for AI Systems / 2310.18040 / ISBN:https://doi.org/10.48550/arXiv.2310.18040 / Published by ArXiv / on (web) Publishing site
5 The Epistemic Condition


Artificial Intelligence Ethics Education in Cybersecurity: Challenges and Opportunities: a focus group report / 2311.00903 / ISBN:https://doi.org/10.48550/arXiv.2311.00903 / Published by ArXiv / on (web) Publishing site
Broader educational preparedness for work in AI Cybersecurity


LLMs grasp morality in concept / 2311.02294 / ISBN:https://doi.org/10.48550/arXiv.2311.02294 / Published by ArXiv / on (web) Publishing site
4 The Moral Model


Educating for AI Cybersecurity Work and Research: Ethics, Systems Thinking, and Communication Requirements / 2311.04326 / ISBN:https://doi.org/10.48550/arXiv.2311.04326 / Published by ArXiv / on (web) Publishing site
Introduction


Towards Effective Paraphrasing for Information Disguise / 2311.05018 / ISBN:https://doi.org/10.1007/978-3-031-28238-6_22 / Published by ArXiv / on (web) Publishing site
4 Evaluation


Unlocking the Potential of ChatGPT: A Comprehensive Exploration of its Applications, Advantages, Limitations, and Future Directions in Natural Language Processing / 2304.02017 / ISBN:https://doi.org/10.48550/arXiv.2304.02017 / Published by ArXiv / on (web) Publishing site
Abstract
1 Introduction
2 Overview of ChatGPT and its capabilities
3 Transformers and pre-trained language models
4 Applications of ChatGPT in real-world scenarios
5 Advantages of ChatGPT in natural language processing
6 Limitations and potential challenges
7 Ethical considerations when using ChatGPT
8 Prompt engineering and generation
10 Future directions for ChatGPT in vision domain


Fairness And Bias in Artificial Intelligence: A Brief Survey of Sources, Impacts, And Mitigation Strategies / 2304.07683 / ISBN:https://doi.org/10.48550/arXiv.2304.07683 / Published by ArXiv / on (web) Publishing site
II. Sources of bias in AI
III. Impacts of bias in AI
IV. Mitigation strategies for bias in AI
VI. Mitigation strategies for fairness in AI


Towards ethical multimodal systems / 2304.13765 / ISBN:https://doi.org/10.48550/arXiv.2304.13765 / Published by ArXiv / on (web) Publishing site
2 Related Work


A Brief History of Prompt: Leveraging Language Models. (Through Advanced Prompting) / 2310.04438 / ISBN:https://doi.org/10.48550/arXiv.2310.04438 / Published by ArXiv / on (web) Publishing site
II. Introduction
VI. 2015: birth of the transformer
VII. The second wave in 2017: rise of RL
VIII. The third wave 2018: the rise of transformers
X. 2020-2021: the rise of LLMs


Synergizing Human-AI Agency: A Guide of 23 Heuristics for Service Co-Creation with LLM-Based Agents / 2310.15065 / ISBN:https://doi.org/10.48550/arXiv.2310.15065 / Published by ArXiv / on (web) Publishing site
4 Findings
5 Discussion


She had Cobalt Blue Eyes: Prompt Testing to Create Aligned and Sustainable Language Models / 2310.18333 / ISBN:https://doi.org/10.48550/arXiv.2310.18333 / Published by ArXiv / on (web) Publishing site
Abstract
1 Introduction
3 ReFLeCT: Robust, Fair, and Safe LLM Construction Test Suite
4 Empirical Evaluation and Outcomes


Safety, Trust, and Ethics Considerations for Human-AI Teaming in Aerospace Control / 2311.08943 / ISBN:https://doi.org/10.48550/arXiv.2311.08943 / Published by ArXiv / on (web) Publishing site
I. Introduction
III. Safety


How Trustworthy are Open-Source LLMs? An Assessment under Malicious Demonstrations Shows their Vulnerabilities / 2311.09447 / ISBN:https://doi.org/10.48550/arXiv.2311.09447 / Published by ArXiv / on (web) Publishing site
4 Experiments


Prudent Silence or Foolish Babble? Examining Large Language Models' Responses to the Unknown / 2311.09731 / ISBN:https://doi.org/10.48550/arXiv.2311.09731 / Published by ArXiv / on (web) Publishing site
4 Related Work


Revolutionizing Customer Interactions: Insights and Challenges in Deploying ChatGPT and Generative Chatbots for FAQs / 2311.09976 / ISBN:https://doi.org/10.48550/arXiv.2311.09976 / Published by ArXiv / on (web) Publishing site
1. Introduction
2. Chatbots Background and Scope of Research
3. Chatbot approaches overview: Taxonomy of existing methods
4. ChatGPT


Practical Cybersecurity Ethics: Mapping CyBOK to Ethical Concerns / 2311.10165 / ISBN:https://doi.org/10.48550/arXiv.2311.10165 / Published by ArXiv / on (web) Publishing site
6 Limitations


Case Repositories: Towards Case-Based Reasoning for AI Alignment / 2311.10934 / ISBN:https://doi.org/10.48550/arXiv.2311.10934 / Published by ArXiv / on (web) Publishing site
1 Introduction
2 Proposed Process


Responsible AI Considerations in Text Summarization Research: A Review of Current Practices / 2311.11103 / ISBN:https://doi.org/10.48550/arXiv.2311.11103 / Published by ArXiv / on (web) Publishing site
4 Findings
7 Limitations


GPT in Data Science: A Practical Exploration of Model Selection / 2311.11516 / ISBN:https://doi.org/10.48550/arXiv.2311.11516 / Published by ArXiv / on (web) Publishing site
Abstract
II. Background
III. Approach: capturing and representing heuristics behind GPT's decision-making process


Responsible AI Research Needs Impact Statements Too / 2311.11776 / ISBN:https://doi.org/10.48550/arXiv.2311.11776 / Published by ArXiv / on (web) Publishing site
Requiring adverse impact statements for RAI research is long overdue
Concluding Reflections


Large Language Models in Education: Vision and Opportunities / 2311.13160 / ISBN:https://doi.org/10.48550/arXiv.2311.13160 / Published by ArXiv / on (web) Publishing site
II. Education and LLMs
V. Key points in LLMSEDU
VI. Challenges and future directions


The Rise of Creative Machines: Exploring the Impact of Generative AI / 2311.13262 / ISBN:https://doi.org/10.48550/arXiv.2311.13262 / Published by ArXiv / on (web) Publishing site
IV. Risks of generative AI


Towards Auditing Large Language Models: Improving Text-based Stereotype Detection / 2311.14126 / ISBN:https://doi.org/10.48550/arXiv.2311.14126 / Published by ArXiv / on (web) Publishing site
Abstract
1 Introduction
2 Related Works
3 Methodology
4 Results and Discussion


Ethical Implications of ChatGPT in Higher Education: A Scoping Review / 2311.14378 / ISBN:https://doi.org/10.48550/arXiv.2311.14378 / Published by ArXiv / on (web) Publishing site
Results
Conclusion


Potential Societal Biases of ChatGPT in Higher Education: A Scoping Review / 2311.14381 / ISBN:https://doi.org/10.48550/arXiv.2311.14381 / Published by ArXiv / on (web) Publishing site
Introduction
Overview of Societal Biases in GAI Models
Discussion
Conclusion


Ethics and Responsible AI Deployment / 2311.14705 / ISBN:https://doi.org/10.48550/arXiv.2311.14705 / Published by ArXiv / on (web) Publishing site
1. Introduction: The Role of Algorithms in Protecting Privacy
4. Addressing bias, transparency, and accountability
7. Establishing responsible AI governance and oversight


Generative AI and US Intellectual Property Law / 2311.16023 / ISBN:https://doi.org/10.48550/arXiv.2311.16023 / Published by ArXiv / on (web) Publishing site
V. Potential harms and mitigation


Survey on AI Ethics: A Socio-technical Perspective / 2311.17228 / ISBN:https://doi.org/10.48550/arXiv.2311.17228 / Published by ArXiv / on (web) Publishing site
2 Privacy and data protection
3 Transparency and explainability
4 Fairness and equity


Deepfakes, Misinformation, and Disinformation in the Era of Frontier AI, Generative AI, and Large AI Models / 2311.17394 / ISBN:https://doi.org/10.48550/arXiv.2311.17394 / Published by ArXiv / on (web) Publishing site
III. The rise of large AI models
V. Technical defense mechanisms


Privacy and Copyright Protection in Generative AI: A Lifecycle Perspective / 2311.18252 / ISBN:https://doi.org/10.48550/arXiv.2311.18252 / Published by ArXiv / on (web) Publishing site
3 Mapping Challenges throughout the Data Lifecycle


From Lab to Field: Real-World Evaluation of an AI-Driven Smart Video Solution to Enhance Community Safety / 2312.02078 / ISBN:https://doi.org/10.48550/arXiv.2312.02078 / Published by ArXiv / on (web) Publishing site
System Evaluation and Results


Contra generative AI detection in higher education assessments / 2312.05241 / ISBN:https://doi.org/10.48550/arXiv.2312.05241 / Published by ArXiv / on (web) Publishing site
2. The pitfalls in detecting generative AI output


Intelligence Primer / 2008.07324 / ISBN:https://doi.org/10.48550/arXiv.2008.07324 / Published by ArXiv / on (web) Publishing site
3 Reasoning
6 Measuring intelligence
7 Mathematically modeling intelligence
11 Control of intelligence
12 Large language models and Generative AI


RE-centric Recommendations for the Development of Trustworthy(er) Autonomous Systems / 2306.01774 / ISBN:https://doi.org/10.48550/arXiv.2306.01774 / Published by ArXiv / on (web) Publishing site
4 Results & analysis


Control Risk for Potential Misuse of Artificial Intelligence in Science / 2312.06632 / ISBN:https://doi.org/10.48550/arXiv.2312.06632 / Published by ArXiv / on (web) Publishing site
2 Risks of Misuse for Artificial Intelligence in Science
3 Control the Risks of AI Models in Science
Appendix C Detailed Implementation of SciGuard


Disentangling Perceptions of Offensiveness: Cultural and Moral Correlates / 2312.06861 / ISBN:https://doi.org/10.48550/arXiv.2312.06861 / Published by ArXiv / on (web) Publishing site
Data Collection
Study 3: Implications for Responsible AI
A Appendix


The AI Assessment Scale (AIAS): A Framework for Ethical Integration of Generative AI in Educational Assessment / 2312.07086 / ISBN:https://doi.org/10.48550/arXiv.2312.07086 / Published by ArXiv / on (web) Publishing site
Conclusion


Culturally Responsive Artificial Intelligence -- Problems, Challenges and Solutions / 2312.08467 / ISBN:https://doi.org/10.48550/arXiv.2312.08467 / Published by ArXiv / on (web) Publishing site
Artificial intelligence – concept and ethical background


Designing Guiding Principles for NLP for Healthcare: A Case Study of Maternal Health / 2312.11803 / ISBN:https://doi.org/10.48550/arXiv.2312.11803 / Published by ArXiv / on (web) Publishing site
3 Materials and methods
A Extended Survey Results
C Full survey questions


Learning Human-like Representations to Enable Learning Human Values / 2312.14106 / ISBN:https://doi.org/10.48550/arXiv.2312.14106 / Published by ArXiv / on (web) Publishing site
1 Introduction
2 Related Work
3 Problem Formulation


Improving Task Instructions for Data Annotators: How Clear Rules and Higher Pay Increase Performance in Data Annotation in the AI Economy / 2312.14565 / ISBN:https://doi.org/10.48550/arXiv.2312.14565 / Published by ArXiv / on (web) Publishing site
Abstract
V. Discussion


Culturally-Attuned Moral Machines: Implicit Learning of Human Value Systems by AI through Inverse Reinforcement Learning / 2312.17479 / ISBN:https://doi.org/10.48550/arXiv.2312.17479 / Published by ArXiv / on (web) Publishing site
Experimental Study
Results
Discussion
Methods
Supplementary Material


Autonomous Threat Hunting: A Future Paradigm for AI-Driven Threat Intelligence / 2401.00286 / ISBN:https://doi.org/10.48550/arXiv.2401.00286 / Published by ArXiv / on (web) Publishing site
7. Evaluation metrics and performance benchmarks


Exploring the Frontiers of LLMs in Psychological Applications: A Comprehensive Review / 2401.01519 / ISBN:https://doi.org/10.48550/arXiv.2401.01519 / Published by ArXiv / on (web) Publishing site
1. Introduction
2. LLMs in cognitive and behavioral psychology
3. LLMs in clinical and counseling psychology
6. LLMs as research tools in psychology


Synthetic Data in AI: Challenges, Applications, and Ethical Implications / 2401.01629 / ISBN:https://doi.org/10.48550/arXiv.2401.01629 / Published by ArXiv / on (web) Publishing site
2. The generation of synthetic data
3. The usage of synthetic data


MULTI-CASE: A Transformer-based Ethics-aware Multimodal Investigative Intelligence Framework / 2401.01955 / ISBN:https://doi.org/10.48550/arXiv.2401.01955 / Published by ArXiv / on (web) Publishing site
II. Related work
V. Evaluation
VI. Discussion and future work


AI Ethics Principles in Practice: Perspectives of Designers and Developers / 2112.07467 / ISBN:https://doi.org/10.48550/arXiv.2112.07467 / Published by ArXiv / on (web) Publishing site
III. Methods
IV. Results


Unmasking Bias in AI: A Systematic Review of Bias Detection and Mitigation Strategies in Electronic Health Record-based Models / 2310.19917 / ISBN:https://doi.org/10.48550/arXiv.2310.19917 / Published by ArXiv / on (web) Publishing site
Results
Discussion


Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making / 2401.08691 / ISBN:https://doi.org/10.48550/arXiv.2401.08691 / Published by ArXiv / on (web) Publishing site
Contents / List of figures / List of tables / Acronyms
I Understanding bias - 2 Bias and moral framework in AI-based decision making
3 Bias on demand: a framework for generating synthetic data with bias
II Mitigating bias - 5 Fairness mitigation
6 FFTree: a flexible tree to mitigate multiple fairness criteria
8 Fairview: an evaluative AI support for addressing fairness
9 Towards fairness through time


FAIR Enough How Can We Develop and Assess a FAIR-Compliant Dataset for Large Language Models' Training? / 2401.11033 / ISBN:https://doi.org/10.48550/arXiv.2401.11033 / Published by ArXiv / on (web) Publishing site
1 Introduction
5 Discussion


Enabling Global Image Data Sharing in the Life Sciences / 2401.13023 / ISBN:https://doi.org/10.48550/arXiv.2401.13023 / Published by ArXiv / on (web) Publishing site
3. Use cases representing different image data types and their challenges and status for sharing
4. Towards global image data sharing


Beyond principlism: Practical strategies for ethical AI use in research practices / 2401.15284 / ISBN:https://doi.org/10.48550/arXiv.2401.15284 / Published by ArXiv / on (web) Publishing site
1 The “Triple-Too” problem of AI ethics
3 Five specific goals and action-guiding strategies for ethical AI use in research practices


A Scoping Study of Evaluation Practices for Responsible AI Tools: Steps Towards Effectiveness Evaluations / 2401.17486 / ISBN:https://doi.org/10.48550/arXiv.2401.17486 / Published by ArXiv / on (web) Publishing site
2 Related work
5 Towards evaluation of RAI tool effectiveness


Detecting Multimedia Generated by Large AI Models: A Survey / 2402.00045 / ISBN:https://doi.org/10.48550/arXiv.2402.00045 / Published by ArXiv / on (web) Publishing site
1 Introduction
2 Generation
3 Detection
5 Discussion


Responsible developments and networking research: a reflection beyond a paper ethical statement / 2402.00442 / ISBN:https://doi.org/10.48550/arXiv.2402.00442 / Published by ArXiv / on (web) Publishing site
3 Beyond technical dimensions


Generative Artificial Intelligence in Higher Education: Evidence from an Analysis of Institutional Policies and Guidelines / 2402.01659 / ISBN:https://doi.org/10.48550/arXiv.2402.01659 / Published by ArXiv / on (web) Publishing site
2. Related literature
4. Findings


Trust and ethical considerations in a multi-modal, explainable AI-driven chatbot tutoring system: The case of collaboratively solving Rubik's Cube / 2402.01760 / ISBN:https://doi.org/10.48550/arXiv.2402.01760 / Published by ArXiv / on (web) Publishing site
3. Methodology
4. Discussion
C. ROSE: Tool and Data ResOurces to Explore the Instability of SEntiment Analysis Systems


Commercial AI, Conflict, and Moral Responsibility: A theoretical analysis and practical approach to the moral responsibilities associated with dual-use AI technology / 2402.01762 / ISBN:https://doi.org/10.48550/arXiv.2402.01762 / Published by ArXiv / on (web) Publishing site
4 Recommendations to address threats posed by crossover AI technology


(A)I Am Not a Lawyer, But...: Engaging Legal Experts towards Responsible LLM Policies for Legal Advice / 2402.01864 / ISBN:https://doi.org/10.48550/arXiv.2402.01864 / Published by ArXiv / on (web) Publishing site
2 Related work and our approach
4 Results
A Provided AI response strategies and examples


Generative Artificial Intelligence in Healthcare: Ethical Considerations and Assessment Checklist / 2311.02107 / ISBN:https://doi.org/10.48550/arXiv.2311.02107 / Published by ArXiv / on (web) Publishing site
Discussion


How do machines learn? Evaluating the AIcon2abs method / 2401.07386 / ISBN:https://doi.org/10.48550/arXiv.2401.07386 / Published by ArXiv / on (web) Publishing site
1. Introduction
4. Results


I Think, Therefore I am: Benchmarking Awareness of Large Language Models Using AwareBench / 2401.17882 / ISBN:https://doi.org/10.48550/arXiv.2401.17882 / Published by ArXiv / on (web) Publishing site
2 Related Work


Mapping the Ethics of Generative AI: A Comprehensive Scoping Review / 2402.08323 / ISBN:https://doi.org/10.48550/arXiv.2402.08323 / Published by ArXiv / on (web) Publishing site
3 Results


Taking Training Seriously: Human Guidance and Management-Based Regulation of Artificial Intelligence / 2402.08466 / ISBN:https://doi.org/10.48550/arXiv.2402.08466 / Published by ArXiv / on (web) Publishing site
Abstract
3 Management-based Regulation and Human-Guided Training
4 Techniques of Human-Guided Training


User Modeling and User Profiling: A Comprehensive Survey / 2402.09660 / ISBN:https://doi.org/10.48550/arXiv.2402.09660 / Published by ArXiv / on (web) Publishing site
4 Current Taxonomy


Inadequacies of Large Language Model Benchmarks in the Era of Generative Artificial Intelligence / 2402.09880 / ISBN:https://doi.org/10.48550/arXiv.2402.09880 / Published by ArXiv / on (web) Publishing site
V. Processual Elements
VII. Discussions


Copyleft for Alleviating AIGC Copyright Dilemma: What-if Analysis, Public Perception and Implications / 2402.12216 / ISBN:https://doi.org/10.48550/arXiv.2402.12216 / Published by ArXiv / on (web) Publishing site
2 Related Work


Evolving AI Collectives to Enhance Human Diversity and Enable Self-Regulation / 2402.12590 / ISBN:https://doi.org/10.48550/arXiv.2402.12590 / Published by ArXiv / on (web) Publishing site
1. Introduction
3. Enhanced Performance of Free-Formed AI Collectives
5. Open Challenges for Free-Formed AI Collectives


The European Commitment to Human-Centered Technology: The Integral Role of HCI in the EU AI Act's Success / 2402.14728 / ISBN:https://doi.org/10.48550/arXiv.2402.14728 / Published by ArXiv / on (web) Publishing site
2 The EU AI Act


Multi-stakeholder Perspective on Responsible Artificial Intelligence and Acceptability in Education / 2402.15027 / ISBN:https://doi.org/10.48550/arXiv.2402.15027 / Published by ArXiv / on (web) Publishing site
1 Introduction
2 Background
6 Discussion


Autonomous Vehicles: Evolution of Artificial Intelligence and Learning Algorithms / 2402.17690 / ISBN:https://doi.org/10.48550/arXiv.2402.17690 / Published by ArXiv / on (web) Publishing site
II. The AI-Powered Development Life-Cycle in Autonomous Vehicles
IV. AI’s Role in the Emerging Trend of Internet of Things (IoT) Ecosystem for Autonomous Vehicles


The Minimum Information about CLinical Artificial Intelligence Checklist for Generative Modeling Research (MI-CLAIM-GEN) / 2403.02558 / ISBN:https://doi.org/10.48550/arXiv.2403.02558 / Published by ArXiv / on (web) Publishing site
Abstract
Part 1. Study design
Part 5. Interpretability of generative models


Towards an AI-Enhanced Cyber Threat Intelligence Processing Pipeline / 2403.03265 / ISBN:https://doi.org/10.48550/arXiv.2403.03265 / Published by ArXiv / on (web) Publishing site
III. The AI-Enhanced CTI Processing Pipeline
IV. Challenges and Considerations


A Survey on Human-AI Teaming with Large Pre-Trained Models / 2403.04931 / ISBN:https://doi.org/10.48550/arXiv.2403.04931 / Published by ArXiv / on (web) Publishing site
Abstract
1 Introduction
2 AI Model Improvements with Human-AI Teaming
3 Effective Human-AI Joint Systems
4 Safe, Secure and Trustworthy AI
5 Applications
6 Conclusion


AGI Artificial General Intelligence for Education / 2304.12479 / ISBN:https://doi.org/10.48550/arXiv.2304.12479 / Published by ArXiv / on (web) Publishing site
Abstract
2. What is AGI


Moral Sparks in Social Media Narratives / 2310.19268 / ISBN:https://doi.org/10.48550/arXiv.2310.19268 / Published by ArXiv / on (web) Publishing site
2. Related Work


Responsible Artificial Intelligence: A Structured Literature Review / 2403.06910 / ISBN:https://doi.org/10.48550/arXiv.2403.06910 / Published by ArXiv / on (web) Publishing site
3. Analysis


Legally Binding but Unfair? Towards Assessing Fairness of Privacy Policies / 2403.08115 / ISBN:https://doi.org/10.48550/arXiv.2403.08115 / Published by ArXiv / on (web) Publishing site
7 Use Cases and Applications


Review of Generative AI Methods in Cybersecurity / 2403.08701 / ISBN:https://doi.org/10.48550/arXiv.2403.08701 / Published by ArXiv / on (web) Publishing site
1 Introduction
4 Cyber Defence


Evaluation Ethics of LLMs in Legal Domain / 2403.11152 / ISBN:https://doi.org/10.48550/arXiv.2403.11152 / Published by ArXiv / on (web) Publishing site
3 Method


Trust in AI: Progress, Challenges, and Future Directions / 2403.14680 / ISBN:https://doi.org/10.48550/arXiv.2403.14680 / Published by ArXiv / on (web) Publishing site
3. Findings


Safeguarding Marketing Research: The Generation, Identification, and Mitigation of AI-Fabricated Disinformation / 2403.14706 / ISBN:https://doi.org/10.48550/arXiv.2403.14706 / Published by ArXiv / on (web) Publishing site
Methodology
Web Appendix A: Analysis of the Disinformation Manipulations


The Journey to Trustworthy AI- Part 1 Pursuit of Pragmatic Frameworks / 2403.15457 / ISBN:https://doi.org/10.48550/arXiv.2403.15457 / Published by ArXiv / on (web) Publishing site
1 Context
2 Trustworthy AI: Too Many Definitions or Lack Thereof?
3 Complexities and Challenges
4 AI Regulation: Current Global Landscape
5 Risk
6 Bias and Fairness
7 Explainable AI as an Enabler of Trustworthy AI
8 Implementation Framework
9 A Few Suggestions for a Viable Path Forward
A Appendix


Analyzing Potential Solutions Involving Regulation to Escape Some of AI's Ethical Concerns / 2403.15507 / ISBN:https://doi.org/10.48550/arXiv.2403.15507 / Published by ArXiv / on (web) Publishing site
A Possible Solution to These Concerns With Business Self-Regulation


The Pursuit of Fairness in Artificial Intelligence Models A Survey / 2403.17333 / ISBN:https://doi.org/10.48550/arXiv.2403.17333 / Published by ArXiv / on (web) Publishing site
3 Conceptualizing Fairness and Bias in ML
4 Practical cases of unfairness in real-world setting
5 Ways to mitigate bias and promote Fairness


Power and Play Investigating License to Critique in Teams AI Ethics Discussions / 2403.19049 / ISBN:https://doi.org/10.48550/arXiv.2403.19049 / Published by ArXiv / on (web) Publishing site
1 Introduction and Related Work
4 RQ2: How Do AI Ethics Discussions Unfold while Playing a Game Oriented toward Speculative Critique?
5 Discussion


AI Act and Large Language Models (LLMs): When critical issues and privacy impact require human and ethical oversight / 2404.00600 / ISBN:https://doi.org/10.48550/arXiv.2404.00600 / Published by ArXiv / on (web) Publishing site
6. Large Language Models (LLMs) - Introduction
8. Conclusions


Exploring the Nexus of Large Language Models and Legal Systems: A Short Survey / 2404.00990 / ISBN:https://doi.org/10.48550/arXiv.2404.00990 / Published by ArXiv / on (web) Publishing site
3 Fine-Tuned Large Language Models in Various Countries and Regions
5 Data Resources for Large Language Models in Law


A Review of Multi-Modal Large Language and Vision Models / 2404.01322 / ISBN:https://doi.org/10.48550/arXiv.2404.01322 / Published by ArXiv / on (web) Publishing site
Abstract
1 Introduction
2 What is a Language Model?
4 Specific Large Language Models
5 Vision Models and Multi-Modal Large Language Models
6 Model Tuning
7 Model Evaluation and Benchmarking


Balancing Progress and Responsibility: A Synthesis of Sustainability Trade-Offs of AI-Based Systems / 2404.03995 / ISBN:https://doi.org/10.48550/arXiv.2404.03995 / Published by ArXiv / on (web) Publishing site
II. Background and Related Work
IV. Results
V. Discussion


Is Your AI Truly Yours? Leveraging Blockchain for Copyrights, Provenance, and Lineage / 2404.06077 / ISBN:https://doi.org/10.48550/arXiv.2404.06077 / Published by ArXiv / on (web) Publishing site
II. Preliminaries
III. Proposed Design: IBIS
IV. Detailed Construction
VI. Evaluation


Frontier AI Ethics: Anticipating and Evaluating the Societal Impacts of Language Model Agents / 2404.06750 / ISBN:https://arxiv.org/abs/2404.06750 / Published by ArXiv / on (web) Publishing site
A Primer
Rebooting Machine Ethics
Language Model Agents in Society


AI Alignment: A Comprehensive Survey / 2310.19852 / ISBN:https://doi.org/10.48550/arXiv.2310.19852 / Published by ArXiv / on (web) Publishing site
1 Introduction
2 Learning from Feedback
3 Learning under Distribution Shift
4 Assurance
6 Conclusion


Epistemic Power in AI Ethics Labor: Legitimizing Located Complaints / 2402.08171 / ISBN:https://doi.org/10.1145/3630106.3658973 / Published by ArXiv / on (web) Publishing site
6 Conclusions: Towards Humble Technical Practices


On the role of ethics and sustainability in business innovation / 2404.07678 / ISBN:https://doi.org/10.48550/arXiv.2404.07678 / Published by ArXiv / on (web) Publishing site
Ethical considerations


PoliTune: Analyzing the Impact of Data Selection and Fine-Tuning on Economic and Political Biases in Large Language Models / 2404.08699 / ISBN:https://doi.org/10.48550/arXiv.2404.08699 / Published by ArXiv / on (web) Publishing site
1 Introduction
2 Background and Related Work
3 Methodology
4 Evaluation
5 Conclusion


Detecting AI Generated Text Based on NLP and Machine Learning Approaches / 2404.10032 / ISBN:https://doi.org/10.48550/arXiv.2404.10032 / Published by ArXiv / on (web) Publishing site
II. Literature Review


Debunking Robot Rights Metaphysically, Ethically, and Legally / 2404.10072 / ISBN:https://doi.org/10.48550/arXiv.2404.10072 / Published by ArXiv / on (web) Publishing site
1 Introduction
3 The Robots at Issue
6 Posthumanism


Taxonomy to Regulation: A (Geo)Political Taxonomy for AI Risks and Regulatory Measures in the EU AI Act / 2404.11476 / ISBN:https://doi.org/10.48550/arXiv.2404.11476 / Published by ArXiv / on (web) Publishing site
4 European Union Artificial Intelligence Act


Just Like Me: The Role of Opinions and Personal Experiences in The Perception of Explanations in Subjective Decision-Making / 2404.12558 / ISBN:https://doi.org/10.48550/arXiv.2404.12558 / Published by ArXiv / on (web) Publishing site
2 Method
3 Results
4 Discussion and Implications


Large Language Model Supply Chain: A Research Agenda / 2404.12736 / ISBN:https://doi.org/10.48550/arXiv.2404.12736 / Published by ArXiv / on (web) Publishing site
Abstract
1 Introduction
2 Definition of LLM Supply Chain
3 LLM Infrastructure
4 LLM Lifecycle
5 Downstream Ecosystem


The Necessity of AI Audit Standards Boards / 2404.13060 / ISBN:https://doi.org/10.48550/arXiv.2404.13060 / Published by ArXiv / on (web) Publishing site
1 Introduction
2 Audit the process, not just the product
4 Auditing standards body, not standard audits


Beyond Personhood: Agency, Accountability, and the Limits of Anthropomorphic Ethical Analysis / 2404.13861 / ISBN:https://doi.org/10.48550/arXiv.2404.13861 / Published by ArXiv / on (web) Publishing site
4 Alternatives to AI as Agent


AI Procurement Checklists: Revisiting Implementation in the Age of AI Governance / 2404.14660 / ISBN:https://doi.org/10.48550/arXiv.2404.14660 / Published by ArXiv / on (web) Publishing site
1 Technical assessments require an AI expert to complete — and we don’t have enough experts


Fairness in AI: challenges in bridging the gap between algorithms and law / 2404.19371 / ISBN:https://doi.org/10.48550/arXiv.2404.19371 / Published by ArXiv / on (web) Publishing site
IV. Criteria for the Selection of Fairness Methods


War Elephants: Rethinking Combat AI and Human Oversight / 2404.19573 / ISBN:https://doi.org/10.48550/arXiv.2404.19573 / Published by ArXiv / on (web) Publishing site
3 Lessons from History: War Elephants
4 Discussion
5 Conclusions


Not a Swiss Army Knife: Academics' Perceptions of Trade-Offs Around Generative Artificial Intelligence Use / 2405.00995 / ISBN:https://doi.org/10.48550/arXiv.2405.00995 / Published by ArXiv / on (web) Publishing site
2 Related Work
3 Method
4 Findings


Towards an Ethical and Inclusive Implementation of Artificial Intelligence in Organizations: A Multidimensional Framework / 2405.01697 / ISBN:https://doi.org/10.48550/arXiv.2405.01697 / Published by ArXiv / on (web) Publishing site
2 How can organizations participate


A Survey on Large Language Models for Critical Societal Domains: Finance, Healthcare, and Law / 2405.01769 / ISBN:https://doi.org/10.48550/arXiv.2405.01769 / Published by ArXiv / on (web) Publishing site
1 Introduction
3 Finance
4 Medicine and Healthcare
5 Law
6 Ethics


AI-Powered Autonomous Weapons Risk Geopolitical Instability and Threaten AI Research / 2405.01859 / ISBN:https://doi.org/10.48550/arXiv.2405.01859 / Published by ArXiv / on (web) Publishing site
3. AWS Proliferation and Threats to Academic Research
4. Policy Recommendations


Organizing a Society of Language Models: Structures and Mechanisms for Enhanced Collective Intelligence / 2405.03825 / ISBN:https://doi.org/10.48550/arXiv.2405.03825 / Published by ArXiv / on (web) Publishing site
Abstract


A Fourth Wave of Open Data? Exploring the Spectrum of Scenarios for Open Data and Generative AI / 2405.04333 / ISBN:https://doi.org/10.48550/arXiv.2405.04333 / Published by ArXiv / on (web) Publishing site
Glossary of Terms
Executive Summary
1. Introduction
3. A Spectrum of Scenarios of Open Data for Generative AI
Appendix


Guiding the Way: A Comprehensive Examination of AI Guidelines in Global Media / 2405.04706 / ISBN:https://doi.org/10.48550/arXiv.2405.04706 / Published by ArXiv / on (web) Publishing site
2 Theoretical Framework
5 Discussion and conclusions


Trustworthy AI-Generative Content in Intelligent 6G Network: Adversarial, Privacy, and Fairness / 2405.05930 / ISBN:https://doi.org/10.48550/arXiv.2405.05930 / Published by ArXiv / on (web) Publishing site
V. Fairness of AIGC in 6G Network


Redefining Qualitative Analysis in the AI Era: Utilizing ChatGPT for Efficient Thematic Analysis / 2309.10771 / ISBN:https://doi.org/10.48550/arXiv.2309.10771 / on (web) Publishing site
2 Related Work


Unsocial Intelligence: an Investigation of the Assumptions of AGI Discourse / 2401.13142 / ISBN:https://doi.org/10.48550/arXiv.2401.13142 / Published by ArXiv / on (web) Publishing site
4 Towards Contextualized, Politically Legitimate, and Social Intelligence


Not My Voice! A Taxonomy of Ethical and Safety Harms of Speech Generators / 2402.01708 / ISBN:https://doi.org/10.48550/arXiv.2402.01708 / Published by ArXiv / on (web) Publishing site
3 Overview of Speech Generation


The Wolf Within: Covert Injection of Malice into MLLM Societies via an MLLM Operative / 2402.14859 / ISBN:https://doi.org/10.48550/arXiv.2402.14859 / Published by ArXiv / on (web) Publishing site
4. Experiments


Social Choice Should Guide AI Alignment in Dealing with Diverse Human Feedback / 2404.10271 / ISBN:https://doi.org/10.48550/arXiv.2404.10271 / Published by ArXiv / on (web) Publishing site
1. Introduction
3. What Are the Collective Decision Problems and their Alternatives in this Context?
5. What Is the Format of Human Feedback?
6. How Do We Incorporate Diverse Individual Feedback?
10. Conclusion


A scoping review of using Large Language Models (LLMs) to investigate Electronic Health Records (EHRs) / 2405.03066 / ISBN:https://doi.org/10.48550/arXiv.2405.03066 / Published by ArXiv / on (web) Publishing site
1 Introduction
2 Materials
3 Results


Integrating Emotional and Linguistic Models for Ethical Compliance in Large Language Models / 2405.07076 / ISBN:https://doi.org/10.48550/arXiv.2405.07076 / Published by ArXiv / on (web) Publishing site
Appendix S: Multiple Adversarial LLMs
Appendix C: Z. Sayre to F. S. Fitzgerald w/ Mixed Emotions


Using ChatGPT for Thematic Analysis / 2405.08828 / ISBN:https://doi.org/10.48550/arXiv.2405.08828 / Published by ArXiv / on (web) Publishing site
7 Conclusion


When AI Eats Itself: On the Caveats of Data Pollution in the Era of Generative AI / 2405.09597 / ISBN:https://doi.org/10.48550/arXiv.2405.09597 / Published by ArXiv / on (web) Publishing site
1 Introduction
2 RQ1: What Happens When AI Eats Itself?
5 Conclusions and Outlook


Cyber Risks of Machine Translation Critical Errors : Arabic Mental Health Tweets as a Case Study / 2405.11668 / ISBN:https://doi.org/10.48550/arXiv.2405.11668 / Published by ArXiv / on (web) Publishing site
Abstract
1. Introduction
2. MT Critical Errors
3. Data Compiling and Annotation
5. Quality Metrics Performance


The Narrow Depth and Breadth of Corporate Responsible AI Research / 2405.12193 / ISBN:https://doi.org/10.48550/arXiv.2405.12193 / Published by ArXiv / on (web) Publishing site
4 The Narrow Depth of Industry’s Responsible AI Research
S1 Additional Analyses on Engagement Analysis


Pragmatic auditing: a pilot-driven approach for auditing Machine Learning systems / 2405.13191 / ISBN:https://doi.org/10.48550/arXiv.2405.13191 / Published by ArXiv / on (web) Publishing site
4 Conducting the Pilots


A Comprehensive Overview of Large Language Models (LLMs) for Cyber Defences: Opportunities and Directions / 2405.14487 / ISBN:https://doi.org/10.48550/arXiv.2405.14487 / Published by ArXiv / on (web) Publishing site
Abstract
I. Introduction
III. Vulnerability Assessment
IV. Network Security
VII. Cyber Security Operations Automation
VIII. Ethical LLMs
IX. Challenges and Open Problems


Towards Clinical AI Fairness: Filling Gaps in the Puzzle / 2405.17921 / ISBN:https://doi.org/10.48550/arXiv.2405.17921 / Published by ArXiv / on (web) Publishing site
Additional material


The ethical situation of DALL-E 2 / 2405.19176 / ISBN:https://doi.org/10.48550/arXiv.2405.19176 / Published by ArXiv / on (web) Publishing site
2 Understanding what can DALL-E 2 actually do
4 Following the RRI (Responsible research innovation) principles


The Future of Child Development in the AI Era. Cross-Disciplinary Perspectives Between AI and Child Development Experts / 2405.19275 / ISBN:https://doi.org/10.48550/arXiv.2405.19275 / Published by ArXiv / on (web) Publishing site
1. Introduction
2. Anticipated AI Use for Children


Using Large Language Models for Humanitarian Frontline Negotiation: Opportunities and Considerations / 2405.20195 / ISBN:https://doi.org/10.48550/arXiv.2405.20195 / Published by ArXiv / on (web) Publishing site
2. Related Work


The AI Alignment Paradox / 2405.20806 / ISBN:https://doi.org/10.48550/arXiv.2405.20806 / Published by ArXiv / on (web) Publishing site
Paper


Responsible AI for Earth Observation / 2405.20868 / ISBN:https://doi.org/10.48550/arXiv.2405.20868 / Published by ArXiv / on (web) Publishing site
1 Introduction
2 Mitigating (Unfair) Bias
3 Secure AI in EO: Focusing on Defense Mechanisms, Uncertainty Modeling and Explainability
4 Geo-Privacy and Privacy-preserving Measures
5 Maintaining Scientific Excellence, Open Data, and Guiding AI Usage Based on Ethical Principles in EO


Gender Bias Detection in Court Decisions: A Brazilian Case Study / 2406.00393 / ISBN:https://doi.org/10.48550/arXiv.2406.00393 / Published by ArXiv / on (web) Publishing site
3 Framework
5 Final Remarks


Transforming Computer Security and Public Trust Through the Exploration of Fine-Tuning Large Language Models / 2406.00628 / ISBN:https://doi.org/10.48550/arXiv.2406.00628 / Published by ArXiv / on (web) Publishing site
Abstract
1 Introduction
2 Background, Foundational Studies, and Discussion:
3 Experimental Design, Overview, and Discussion
4 Comparative Analysis of Pre-Trained Models.
5 Discussion and further research


How Ethical Should AI Be? How AI Alignment Shapes the Risk Preferences of LLMs / 2406.01168 / ISBN:https://doi.org/10.48550/arXiv.2406.01168 / Published by ArXiv / on (web) Publishing site
I. Description of Method/Empirical Design
Figures and tables


Evaluating AI fairness in credit scoring with the BRIO tool / 2406.03292 / ISBN:https://doi.org/10.48550/arXiv.2406.03292 / Published by ArXiv / on (web) Publishing site
1 Introduction


Promoting Fairness and Diversity in Speech Datasets for Mental Health and Neurological Disorders Research / 2406.04116 / ISBN:https://doi.org/10.48550/arXiv.2406.04116 / Published by ArXiv / on (web) Publishing site
3. Related Work


MoralBench: Moral Evaluation of LLMs / 2406.04428 / Published by ArXiv / on (web) Publishing site
2 Related Work
4 Experiments


Can Prompt Modifiers Control Bias? A Comparative Analysis of Text-to-Image Generative Models / 2406.05602 / Published by ArXiv / on (web) Publishing site
2. Related Work
6. Discussion


Deception Analysis with Artificial Intelligence: An Interdisciplinary Perspective / 2406.05724 / ISBN:https://doi.org/10.48550/arXiv.2406.05724 / Published by ArXiv / on (web) Publishing site
4 DAMAS: A MAS Framework for Deception Analysis


The Impact of AI on Academic Research and Publishing / 2406.06009 / Published by ArXiv / on (web) Publishing site
Introduction
AI in Editorial Processes


An Empirical Design Justice Approach to Identifying Ethical Considerations in the Intersection of Large Language Models and Social Robotics / 2406.06400 / ISBN:https://doi.org/10.48550/arXiv.2406.06400 / Published by ArXiv / on (web) Publishing site
2 Theoretical Background
5 Discussion


The Ethics of Interaction: Mitigating Security Threats in LLMs / 2401.12273 / ISBN:https://doi.org/10.48550/arXiv.2401.12273 / Published by ArXiv / on (web) Publishing site
1 Introduction
2 Why Ethics Matter in LLM Attacks?


Global AI Governance in Healthcare: A Cross-Jurisdictional Regulatory Analysis / 2406.08695 / ISBN:https://doi.org/10.48550/arXiv.2406.08695 / Published by ArXiv / on (web) Publishing site
4 Global Regulatory Landscape of AI
5 Generative AI: The New Frontier


Fair by design: A sociotechnical approach to justifying the fairness of AI-enabled systems across the lifecycle / 2406.09029 / ISBN:https://doi.org/10.48550/arXiv.2406.09029 / Published by ArXiv / on (web) Publishing site
2 Fairness and AI
3 Assuring fairness across the AI lifecycle
4 Assuring AI fairness in healthcare


Some things never change: how far generative AI can really change software engineering practice / 2406.09725 / ISBN:https://doi.org/10.48550/arXiv.2406.09725 / Published by ArXiv / on (web) Publishing site
4 Results
5 Limitations


Federated Learning driven Large Language Models for Swarm Intelligence: A Survey / 2406.09831 / ISBN:https://doi.org/10.48550/arXiv.2406.09831 / Published by ArXiv / on (web) Publishing site
Abstract
I. Introduction
II. Foundations and Integration of SI and LLM
III. Federated LLMs for Swarm Intelligence


Applications of Generative AI in Healthcare: algorithmic, ethical, legal and societal considerations / 2406.10632 / ISBN:https://doi.org/10.48550/arXiv.2406.10632 / Published by ArXiv / on (web) Publishing site
II. Selection of application
III. Analysis
Appendix A Societal aspects
Appendix C Algorithmic / technical aspects


Justice in Healthcare Artificial Intelligence in Africa / 2406.10653 / ISBN:https://doi.org/10.48550/arXiv.2406.10653 / Published by ArXiv / on (web) Publishing site
Abstract
3. Ensuring Equitable Access to AI Technologies
5. Promoting Global Solidarity
7. Addressing Bias and Enforcing Fairness
Conclusion


Current state of LLM Risks and AI Guardrails / 2406.12934 / ISBN:https://doi.org/10.48550/arXiv.2406.12934 / Published by ArXiv / on (web) Publishing site
2 Large Language Model Risks


Leveraging Large Language Models for Patient Engagement: The Power of Conversational AI in Digital Health / 2406.13659 / ISBN:https://doi.org/10.48550/arXiv.2406.13659 / Published by ArXiv / on (web) Publishing site
I. INTRODUCTION
II. RECENT ADVANCEMENTS IN LARGE LANGUAGE MODELS


Documenting Ethical Considerations in Open Source AI Models / 2406.18071 / ISBN:https://doi.org/10.48550/arXiv.2406.18071 / Published by ArXiv / on (web) Publishing site
2 RELATED WORK
4 RESULTS
5 DISCUSSION AND IMPLICATIONS


AI Alignment through Reinforcement Learning from Human Feedback? Contradictions and Limitations / 2406.18346 / ISBN:https://doi.org/10.48550/arXiv.2406.18346 / Published by ArXiv / on (web) Publishing site
2 Background
4 The Internal Tensions and Ethical Issues in RLxF


A Survey on Privacy Attacks Against Digital Twin Systems in AI-Robotics / 2406.18812 / ISBN:https://doi.org/10.48550/arXiv.2406.18812 / Published by ArXiv / on (web) Publishing site
III. ATTACKS ON DT-INTEGRATED AI ROBOTS
IV. DT-INTEGRATED ROBOTICS DESIGN CONSIDERATIONS AND DISCUSSION


SecGenAI: Enhancing Security of Cloud-based Generative AI Applications within Australian Critical Technologies of National Interest / 2407.01110 / ISBN:https://doi.org/10.48550/arXiv.2407.01110 / Published by ArXiv / on (web) Publishing site
II. UNDERSTANDING GENAI SECURITY


Artificial intelligence, rationalization, and the limits of control in the public sector: the case of tax policy optimization / 2407.05336 / ISBN:https://doi.org/10.48550/arXiv.2407.05336 / Published by ArXiv / on (web) Publishing site
4. AI-driven tax policy to reduce economic inequality: a thought experiment


A Blueprint for Auditing Generative AI / 2407.05338 / ISBN:https://doi.org/10.48550/arXiv.2407.05338 / Published by ArXiv / on (web) Publishing site
2 Why audit generative AI systems?
3 How to audit generative AI systems?
4 Governance audits
5 Model audits


Operationalising AI governance through ethics-based auditing: An industry case study / 2407.06232 / ISBN:https://doi.org/10.48550/arXiv.2407.06232 / Published by ArXiv / on (web) Publishing site
6. Lessons learned from AstraZeneca’s 2021 AI audit


Why should we ever automate moral decision making? / 2407.07671 / ISBN:https://doi.org/10.48550/arXiv.2407.07671 / Published by ArXiv / on (web) Publishing site
Abstract
1 Introduction
2 Reasons for automated moral decision making


FUTURE-AI: International consensus guideline for trustworthy and deployable artificial intelligence in healthcare / 2309.12325 / ISBN:https://doi.org/10.48550/arXiv.2309.12325 / Published by ArXiv / on (web) Publishing site
Table 1


Rethinking Machine Ethics -- Can LLMs Perform Moral Reasoning through the Lens of Moral Theories? / 2308.15399 / ISBN:https://doi.org/10.48550/arXiv.2308.15399 / Published by ArXiv / on (web) Publishing site
C Experimental Details


PoliTune: Analyzing the Impact of Data Selection and Fine-Tuning on Economic and Political Biases in Large Language Models / 2404.08699 / ISBN:https://doi.org/10.48550/arXiv.2404.08699 / Published by ArXiv / on (web) Publishing site
F Additional Figures


Bridging the Global Divide in AI Regulation: A Proposal for a Contextual, Coherent, and Commensurable Framework / 2303.11196 / ISBN:https://doi.org/10.48550/arXiv.2303.11196 / Published by ArXiv / on (web) Publishing site
III. Striking a Balance Between the Two Approaches
IV. Proposing an Alternative 3C Framework


Have We Reached AGI? Comparing ChatGPT, Claude, and Gemini to Human Literacy and Education Benchmarks / 2407.09573 / ISBN:https://doi.org/10.48550/arXiv.2407.09573 / Published by ArXiv / on (web) Publishing site
2 Literature Review
6 Conclusion


Generative AI for Health Technology Assessment: Opportunities, Challenges, and Policy Considerations / 2407.11054 / ISBN:https://doi.org/10.48550/arXiv.2407.11054 / Published by ArXiv / on (web) Publishing site
Introduction
A brief history of AI and generative AI
Applications of generative AI to real-world evidence (RWE):
Limitations of generative AI in HTA applications
Glossary
Appendices


Thorns and Algorithms: Navigating Generative AI Challenges Inspired by Giraffes and Acacias / 2407.11360 / ISBN:https://doi.org/10.48550/arXiv.2407.11360 / Published by ArXiv / on (web) Publishing site
3 Giraffe and Acacia: Reciprocal Adaptations and Shaping
4 Generative AI and Humans: Risks and Mitigation
5 Meta Analysis: Limits of the Analogy


Prioritizing High-Consequence Biological Capabilities in Evaluations of Artificial Intelligence Models / 2407.13059 / ISBN:https://doi.org/10.48550/arXiv.2407.13059 / Published by ArXiv / on (web) Publishing site
Introduction


Assurance of AI Systems From a Dependability Perspective / 2407.13948 / ISBN:https://doi.org/10.48550/arXiv.2407.13948 / Published by ArXiv / on (web) Publishing site
3 Assurance of AI Systems for Specific Functions
4 Assurance for General-Purpose AI
5 Assurance and Alignment for AGI


Open Artificial Knowledge / 2407.14371 / ISBN:https://doi.org/10.48550/arXiv.2407.14371 / Published by ArXiv / on (web) Publishing site
Abstract
2. Key Challenges of Artificial Data


RogueGPT: dis-ethical tuning transforms ChatGPT4 into a Rogue AI in 158 Words / 2407.15009 / ISBN:https://doi.org/10.48550/arXiv.2407.15009 / Published by ArXiv / on (web) Publishing site
I. Introduction
II. Background
III. Methodology
VII. Conclusion


Mapping the individual, social, and biospheric impacts of Foundation Models / 2407.17129 / ISBN:https://doi.org/10.48550/arXiv.2407.17129 / Published by ArXiv / on (web) Publishing site
1 Introduction
4 Mapping Individual, Social, and Biospheric Impacts of Foundation Models
5 Discussion: Grappling with the Scale and Interconnectedness of Foundation Models
Impact Statement


Exploring the Role of Social Support when Integrating Generative AI into Small Business Workflows / 2407.21404 / ISBN:https://doi.org/10.48550/arXiv.2407.21404 / Published by ArXiv / on (web) Publishing site
1 Introduction
2 Related Work


Deepfake Media Forensics: State of the Art and Challenges Ahead / 2408.00388 / ISBN:https://doi.org/10.48550/arXiv.2408.00388 / Published by ArXiv / on (web) Publishing site
4. Passive Deepfake Authentication Methods
5. Deepfakes Detection Method on Realistic Scenarios
6. Active Authentication


Integrating ESG and AI: A Comprehensive Responsible AI Assessment Framework / 2408.00965 / ISBN:https://doi.org/10.48550/arXiv.2408.00965 / Published by ArXiv / on (web) Publishing site
4 ESG-AI framework


AI for All: Identifying AI incidents Related to Diversity and Inclusion / 2408.01438 / ISBN:https://doi.org/10.48550/arXiv.2408.01438 / Published by ArXiv / on (web) Publishing site
2 Background and Related Work
4 Results


Surveys Considered Harmful? Reflecting on the Use of Surveys in AI Research, Development, and Governance / 2408.01458 / ISBN:https://doi.org/10.48550/arXiv.2408.01458 / Published by ArXiv / on (web) Publishing site
3 Methods
5 Discussion
7 Research Ethics and Social Impact
B Additional Materials for Pilot Survey


Improving Large Language Model (LLM) fidelity through context-aware grounding: A systematic approach to reliability and veracity / 2408.04023 / ISBN:https://doi.org/10.48550/arXiv.2408.04023 / Published by ArXiv / on (web) Publishing site
4. Model architecture and training parameters
5. Model Training


Criticizing Ethics According to Artificial Intelligence / 2408.04609 / ISBN:https://doi.org/10.48550/arXiv.2408.04609 / Published by ArXiv / on (web) Publishing site
1 Preliminary notes


Between Copyright and Computer Science: The Law and Ethics of Generative AI / 2403.14653 / ISBN:https://doi.org/10.48550/arXiv.2403.14653 / Published by ArXiv / on (web) Publishing site
I. The Why and How Behind LLMs
II. The Difference Between Academic and Commercial Research
III. A Guide for Data in LLM Research
IV. The Path Ahead


The Responsible Foundation Model Development Cheatsheet: A Review of Tools & Resources / 2406.16746 / ISBN:https://doi.org/10.48550/arXiv.2406.16746 / Published by ArXiv / on (web) Publishing site
1 Introduction
3 Data Sources
4 Data Preparation
5 Data Documentation and Release
6 Model Training
7 Environmental Impact
8 Model Evaluation


Recent Advances in Generative AI and Large Language Models: Current Status, Challenges, and Perspectives / 2407.14962 / ISBN:https://doi.org/10.48550/arXiv.2407.14962 / Published by ArXiv / on (web) Publishing site
I. Introduction
II. Generative AI
III. Language Modeling
IV. Challenges of Generative AI and LLMs


VersusDebias: Universal Zero-Shot Debiasing for Text-to-Image Models via SLM-Based Prompt Engineering and Generative Adversary / 2407.19524 / ISBN:https://doi.org/10.48550/arXiv.2407.19524 / Published by ArXiv / on (web) Publishing site
3 Method


Neuro-Symbolic AI for Military Applications / 2408.09224 / ISBN:https://doi.org/10.48550/arXiv.2408.09224 / Published by ArXiv / on (web) Publishing site
V. Challenges and Risks


Don't Kill the Baby: The Case for AI in Arbitration / 2408.11608 / ISBN:https://doi.org/10.48550/arXiv.2408.11608 / Published by ArXiv / on (web) Publishing site
Introduction
I. AI and the Federal Arbitration Act


CIPHER: Cybersecurity Intelligent Penetration-testing Helper for Ethical Researcher / 2408.11650 / ISBN:https://doi.org/10.48550/arXiv.2408.11650 / Published by ArXiv / on (web) Publishing site
Abstract
1. Introduction
2. Background and Related Works
3. Methodology
4. Experiment Results


Catalog of General Ethical Requirements for AI Certification / 2408.12289 / ISBN:https://doi.org/10.48550/arXiv.2408.12289 / Published by ArXiv / on (web) Publishing site
6 Fairness (F)
7 Privacy and Data Protection (P)
11 Truthfulness (TR)


Is Generative AI the Next Tactical Cyber Weapon For Threat Actors? Unforeseen Implications of AI Generated Cyber Attacks / 2408.12806 / ISBN:https://doi.org/10.48550/arXiv.2408.12806 / Published by ArXiv / on (web) Publishing site
IV. Attack Methodology


Has Multimodal Learning Delivered Universal Intelligence in Healthcare? A Comprehensive Survey / 2408.12880 / ISBN:https://doi.org/10.48550/arXiv.2408.12880 / Published by ArXiv / on (web) Publishing site
3 Multimodal Medical Studies
4 Contrastive Foundation Models (CFMs)
5 Multimodal LLMs (MLLMs)
6 Discussions of Current Studies
7 Challenges and Future Directions
Appendix


Aligning XAI with EU Regulations for Smart Biomedical Devices: A Methodology for Compliance Analysis / 2408.15121 / ISBN:https://doi.org/10.48550/arXiv.2408.15121 / Published by ArXiv / on (web) Publishing site
4 Background


What Is Required for Empathic AI? It Depends, and Why That Matters for AI Developers and Users / 2408.15354 / ISBN:https://doi.org/10.48550/arXiv.2408.15354 / Published by ArXiv / on (web) Publishing site
What Empathic Capabilities Do AIs Need?


Trustworthy and Responsible AI for Human-Centric Autonomous Decision-Making Systems / 2408.15550 / ISBN:https://doi.org/10.48550/arXiv.2408.15550 / Published by ArXiv / on (web) Publishing site
1 Introduction
2 Trustworthy and Responsible AI Definition
4 Biases
5 Trustworthy and Responsible AI in Human-centric Applications


A Survey for Large Language Models in Biomedicine / 2409.00133 / ISBN:https://doi.org/10.48550/arXiv.2409.00133 / Published by ArXiv / on (web) Publishing site
2 Background
4 Adapting General LLMs to the Biomedical Field
5 Discussion


Digital Homunculi: Reimagining Democracy Research with Generative Agents / 2409.00826 / ISBN:https://doi.org/10.48550/arXiv.2409.00826 / Published by ArXiv / on (web) Publishing site
1. Introduction
3. How GenAI Could Make a Difference
4. Risks and Caveats
5. Annoyances or Dealbreakers?


The overlooked need for Ethics in Complexity Science: Why it matters / 2409.02002 / ISBN:https://doi.org/10.48550/arXiv.2409.02002 / Published by ArXiv / on (web) Publishing site
Mapping ethical challenges in complexity science


AI Governance in Higher Education: Case Studies of Guidance at Big Ten Universities / 2409.02017 / ISBN:https://doi.org/10.48550/arXiv.2409.02017 / Published by ArXiv / on (web) Publishing site
Introduction


DetoxBench: Benchmarking Large Language Models for Multitask Fraud & Abuse Detection / 2409.06072 / ISBN:https://doi.org/10.48550/arXiv.2409.06072 / Published by ArXiv / on (web) Publishing site
2 Prior Benchmarks
4 LLM Services (Infrastructure)
7 Limitations
10 Appendix


Don't Kill the Baby: The Case for AI in Arbitration / 2408.11608 / ISBN:https://doi.org/10.48550/arXiv.2408.11608 / Published by ArXiv / on (web) Publishing site
II. The Critics Are Killing the Baby


On the Creativity of Large Language Models / 2304.00008 / ISBN:https://doi.org/10.48550/arXiv.2304.00008 / Published by ArXiv / on (web) Publishing site
2 A Creative Journey from Ada Lovelace to Foundation Models
3 Large Language Models and Boden’s Three Criteria
4 Easy and Hard Problems in Machine Creativity
5 Practical Implications


Artificial intelligence to advance Earth observation: : A review of models, recent trends, and pathways forward / 2305.08413 / ISBN:https://doi.org/10.48550/arXiv.2305.08413 / Published by ArXiv / on (web) Publishing site
Part I Modelling - Machine learning, computer vision and processing 1 Machine learning and computer vision for Earth observation
5 Physics-aware machine learning
Part III Communicating - Machine-user interaction, trustworthiness & ethics 6 User-centric Earth observation
7 Earth observation and society: the growing relevance of ethics


LLM generated responses to mitigate the impact of hate speech / 2311.16905 / ISBN:https://doi.org/10.48550/arXiv.2311.16905 / Published by ArXiv / on (web) Publishing site
2 Related Work
4 Hate Classifier Model


Why business adoption of quantum and AI technology must be ethical / 2312.10081 / ISBN:https://doi.org/10.48550/arXiv.2312.10081 / Published by ArXiv / on (web) Publishing site
Argument by regulatory relevance


Data-Centric Foundation Models in Computational Healthcare: A Survey / 2401.02458 / ISBN:https://doi.org/10.48550/arXiv.2401.02458 / Published by ArXiv / on (web) Publishing site
1 Introduction
2 Foundation Models
3 Foundation Models in Healthcare
4 Multi-Modal Data Fusion
5 Data Quantity
6 Data Annotation
7 Data Privacy


Ethical Artificial Intelligence Principles and Guidelines for the Governance and Utilization of Highly Advanced Large Language Models / 2401.10745 / ISBN:https://doi.org/10.48550/arXiv.2401.10745 / Published by ArXiv / on (web) Publishing site
Background


Integrating Generative AI in Hackathons: Opportunities, Challenges, and Educational Implications / 2401.17434 / ISBN:https://doi.org/10.48550/arXiv.2401.17434 / Published by ArXiv / on (web) Publishing site
1. Introduction


Large language models as linguistic simulators and cognitive models in human research / 2402.04470 / ISBN:https://doi.org/10.48550/arXiv.2402.04470 / Published by ArXiv / on (web) Publishing site
Language models as human participants
Six fallacies that misinterpret language models
Using language models to simulate roles and model cognitive processes


Navigating LLM Ethics: Advancements, Challenges, and Future Directions / 2406.18841 / ISBN:https://doi.org/10.48550/arXiv.2406.18841 / Published by ArXiv / on (web) Publishing site
I. Introduction
IV. Findings and Resultant Themes
V. Discussion


How Mature is Requirements Engineering for AI-based Systems? A Systematic Mapping Study on Practices, Challenges, and Future Research Directions / 2409.07192 / ISBN:https://doi.org/10.48550/arXiv.2409.07192 / Published by ArXiv / on (web) Publishing site
5 Open Challenges and Future Research Directions (RQ5)


Synthetic Human Memories: AI-Edited Images and Videos Can Implant False Memories and Distort Recollection / 2409.08895 / ISBN:https://doi.org/10.48550/arXiv.2409.08895 / Published by ArXiv / on (web) Publishing site
5 Discussion


Improving governance outcomes through AI documentation: Bridging theory and practice / 2409.08960 / ISBN:https://doi.org/10.48550/arXiv.2409.08960 / Published by ArXiv / on (web) Publishing site
2 What organizations could document about AI systems
4 Results


GenAI Advertising: Risks of Personalizing Ads with LLMs / 2409.15436 / ISBN:https://doi.org/10.48550/arXiv.2409.15436 / Published by ArXiv / on (web) Publishing site
2 Background and Related Work


Artificial Human Intelligence: The role of Humans in the Development of Next Generation AI / 2409.16001 / ISBN:https://doi.org/10.48550/arXiv.2409.16001 / Published by ArXiv / on (web) Publishing site
II. Views on Intelligence
IV. Human-Level AI and Challenges/Perspectives
VI. Conclusion


Ethical and Scalable Automation: A Governance and Compliance Framework for Business Applications / 2409.16872 / ISBN:https://doi.org/10.48550/arXiv.2409.16872 / Published by ArXiv / on (web) Publishing site
Abstract
2. Literature Review
4. Framework Development
5. Analysis and Discussion


Decoding Large-Language Models: A Systematic Overview of Socio-Technical Impacts, Constraints, and Emerging Questions / 2409.16974 / ISBN:https://doi.org/10.48550/arXiv.2409.16974 / Published by ArXiv / on (web) Publishing site
1 Introduction
5 Aims & Objectives (RQ1)
6 Methodologies & Capabilities (RQ2)


Social Media Bot Policies: Evaluating Passive and Active Enforcement / 2409.18931 / ISBN:https://doi.org/10.48550/arXiv.2409.18931 / Published by ArXiv / on (web) Publishing site
III. Current Platform Measures


Safety challenges of AI in medicine / 2409.18968 / ISBN:https://doi.org/10.48550/arXiv.2409.18968 / Published by ArXiv / on (web) Publishing site
2 Inherent problems of AI related to medicine
4 AI safety issues related to large language models in medicine


Responsible AI in Open Ecosystems: Reconciling Innovation with Risk Assessment and Disclosure / 2409.19104 / ISBN:https://doi.org/10.48550/arXiv.2409.19104 / Published by ArXiv / on (web) Publishing site
2 Related Work
3 Methods
B Service-ready Features and Identifiers


Enhancing transparency in AI-powered customer engagement / 2410.01809 / ISBN:https://doi.org/10.48550/arXiv.2410.01809 / Published by ArXiv / on (web) Publishing site
Explaining AI-Powered Decision Making in Customer Engagement


Ethical software requirements from user reviews: A systematic literature review / 2410.01833 / ISBN:https://doi.org/10.48550/arXiv.2410.01833 / Published by ArXiv / on (web) Publishing site
IV. Results


Clinnova Federated Learning Proof of Concept: Key Takeaways from a Cross-border Collaboration / 2410.02443 / ISBN:https://doi.org/10.48550/arXiv.2410.02443 / Published by ArXiv / on (web) Publishing site
I. Introduction
VII. Evaluations and Experiments


DailyDilemmas: Revealing Value Preferences of LLMs with Quandaries of Daily Life / 2410.02683 / ISBN:https://doi.org/10.48550/arXiv.2410.02683 / Published by ArXiv / on (web) Publishing site
5 Examining LLM's Adherence to Design Principles and the Steerability of Value Preferences


Application of AI in Credit Risk Scoring for Small Business Loans: A case study on how AI-based random forest model improves a Delphi model outcome in the case of Azerbaijani SMEs / 2410.05330 / ISBN:https://doi.org/10.48550/arXiv.2410.05330 / Published by ArXiv / on (web) Publishing site
Methodology
Discussion


Investigating Labeler Bias in Face Annotation for Machine Learning / 2301.09902 / ISBN:https://doi.org/10.48550/arXiv.2301.09902 / Published by ArXiv / on (web) Publishing site
Abstract
2. Related Work


From human-centered to social-centered artificial intelligence: Assessing ChatGPT's impact through disruptive events / 2306.00227 / ISBN:https://doi.org/10.48550/arXiv.2306.00227 / Published by ArXiv / on (web) Publishing site
The multiple levels of AI impact
The emerging social impacts of ChatGPT


Trust or Bust: Ensuring Trustworthiness in Autonomous Weapon Systems / 2410.10284 / ISBN:https://doi.org/10.48550/arXiv.2410.10284 / Published by ArXiv / on (web) Publishing site
III. Research Methodology
IV. Challenges of AWS


Learning Human-like Representations to Enable Learning Human Values / 2312.14106 / ISBN:https://doi.org/10.48550/arXiv.2312.14106 / Published by ArXiv / on (web) Publishing site
A. Appendix


Study on the Helpfulness of Explainable Artificial Intelligence / 2410.11896 / ISBN:https://doi.org/10.48550/arXiv.2410.11896 / Published by ArXiv / on (web) Publishing site
2 Measuring Explainability
4 Survey Results


Navigating the Cultural Kaleidoscope: A Hitchhiker's Guide to Sensitivity in Large Language Models / 2410.12880 / ISBN:https://doi.org/10.48550/arXiv.2410.12880 / Published by ArXiv / on (web) Publishing site
1 Introduction
6 Main results on evaluation set


Data Defenses Against Large Language Models / 2410.13138 / ISBN:https://doi.org/10.48550/arXiv.2410.13138 / Published by ArXiv / on (web) Publishing site
2 Ethics of Resisting LLM Inference
3 Threat Model
5 Experiments


Do LLMs Have Political Correctness? Analyzing Ethical Biases and Jailbreak Vulnerabilities in AI Systems / 2410.13334 / ISBN:https://doi.org/10.48550/arXiv.2410.13334 / Published by ArXiv / on (web) Publishing site
References


Confrontation or Acceptance: Understanding Novice Visual Artists' Perception towards AI-assisted Art Creation / 2410.14925 / ISBN:https://doi.org/10.48550/arXiv.2410.14925 / Published by ArXiv / on (web) Publishing site
5 RQ1: Evolution of the Opinions Towards AI Tools
8 RQ4: Expectation and Confrontation Towards The Future


Jailbreaking and Mitigation of Vulnerabilities in Large Language Models / 2410.15236 / ISBN:https://doi.org/10.48550/arXiv.2410.15236 / Published by ArXiv / on (web) Publishing site
II. Background and Concepts
III. Jailbreak Attack Methods and Techniques
IV. Defense Mechanisms Against Jailbreak Attacks
V. Evaluation and Benchmarking
VI. Research Gaps and Future Directions
VII. Conclusion


Ethical AI in Retail: Consumer Privacy and Fairness / 2410.15369 / ISBN:https://doi.org/10.48550/arXiv.2410.15369 / Published by ArXiv / on (web) Publishing site
2.0 Literature Review


Redefining Finance: The Influence of Artificial Intelligence (AI) and Machine Learning (ML) / 2410.15951 / ISBN:https://doi.org/10.48550/arXiv.2410.15951 / Published by ArXiv / on (web) Publishing site
What Is AI & ML


Vernacularizing Taxonomies of Harm is Essential for Operationalizing Holistic AI Safety / 2410.16562 / ISBN:https://doi.org/10.48550/arXiv.2410.16562 / Published by ArXiv / on (web) Publishing site
Introduction
Taxonomies of Harm Must be Vernacularized to be Operationalized
Limitations


Distribution of Responsibility During the Usage of AI-Based Exoskeletons for Upper Limb Rehabilitation / 2410.16887 / ISBN:https://doi.org/10.48550/arXiv.2410.16887 / Published by ArXiv / on (web) Publishing site
II. Case Study: AIBle
V. Technical Factors During the System Design


Towards Automated Penetration Testing: Introducing LLM Benchmark, Analysis, and Improvements / 2410.17141 / ISBN:https://doi.org/10.48550/arXiv.2410.17141 / Published by ArXiv / on (web) Publishing site
4 Evaluation
8 Limitations


Demystifying Large Language Models for Medicine: A Primer / 2410.18856 / ISBN:https://doi.org/10.48550/arXiv.2410.18856 / Published by ArXiv / on (web) Publishing site
Large Language Model Selection
Glossary


The Cat and Mouse Game: The Ongoing Arms Race Between Diffusion Models and Detection Methods / 2410.18866 / ISBN:https://doi.org/10.48550/arXiv.2410.18866 / Published by ArXiv / on (web) Publishing site
II. Fundamentals of Diffusion Models and Detection Challenges
III. Detection Methods Based on Image Analysis
V. Datasets and Benchmarks
VIII. Research Gaps and Future Directions


TRIAGE: Ethical Benchmarking of AI Models Through Mass Casualty Simulations / 2410.18991 / ISBN:https://doi.org/10.48550/arXiv.2410.18991 / Published by ArXiv / on (web) Publishing site
Appendices


The Dark Side of AI Companionship: A Taxonomy of Harmful Algorithmic Behaviors in Human-AI Relationships / 2410.20130 / ISBN:https://doi.org/10.48550/arXiv.2410.20130 / Published by ArXiv / on (web) Publishing site
3 Methodology
4 Results


The Trap of Presumed Equivalence: Artificial General Intelligence Should Not Be Assessed on the Scale of Human Intelligence / 2410.21296 / ISBN:https://doi.org/10.48550/arXiv.2410.21296 / Published by ArXiv / on (web) Publishing site
3 Assessing the Current State of Self-Awareness in Artificial Intelligent Systems
4 Free Evolution, The Imperative and Intent
6 Conclusions


Standardization Trends on Safety and Trustworthiness Technology for Advanced AI / 2410.22151 / ISBN:https://doi.org/10.48550/arXiv.2410.22151 / Published by ArXiv / on (web) Publishing site
1 Introduction
3 Trends in advanced AI safety and trustworthiness standardization
4 Conclusion


Democratizing Reward Design for Personal and Representative Value-Alignment / 2410.22203 / ISBN:https://doi.org/10.48550/arXiv.2410.22203 / Published by ArXiv / on (web) Publishing site
3 Interactive-Reflective Dialogue Alignment (IRDA) System
4 Study Design & Methodology
5 Results: Study 1 - Multi-Agent Apple Farming
7 Discussion
Appendices


Moral Agency in Silico: Exploring Free Will in Large Language Models / 2410.23310 / ISBN:https://doi.org/10.48550/arXiv.2410.23310 / Published by ArXiv / on (web) Publishing site
Theoretical Framework
Discussion


Using Large Language Models for a standard assessment mapping for sustainable communities / 2411.00208 / ISBN:https://doi.org/10.48550/arXiv.2411.00208 / Published by ArXiv / on (web) Publishing site
1 Introduction
2 Literature Review
3 Methodology
6 Future Directions


Where Assessment Validation and Responsible AI Meet / 2411.02577 / ISBN:https://doi.org/10.48550/arXiv.2411.02577 / Published by ArXiv / on (web) Publishing site
Classical Assessment Validation Theory and Responsible AI
The Evolution of Responsible AI for Assessment
Integrating Classical Validation Theory and Responsible AI


Examining Human-AI Collaboration for Co-Writing Constructive Comments Online / 2411.03295 / ISBN:https://doi.org/10.48550/arXiv.2411.03295 / Published by ArXiv / on (web) Publishing site
2 Related Work


Smoke Screens and Scapegoats: The Reality of General Data Protection Regulation Compliance -- Privacy and Ethics in the Case of Replika AI / 2411.04490 / ISBN:https://doi.org/10.48550/arXiv.2411.04490 / Published by ArXiv / on (web) Publishing site
1. Introduction
4. Results
5. Discussion


A Comprehensive Review of Multimodal XR Applications, Risks, and Ethical Challenges in the Metaverse / 2411.04508 / ISBN:https://doi.org/10.48550/arXiv.2411.04508 / Published by ArXiv / on (web) Publishing site
4. Potential Risks and Ethical Challenges of XR and the Metaverse


I Always Felt that Something Was Wrong.: Understanding Compliance Risks and Mitigation Strategies when Professionals Use Large Language Models / 2411.04576 / ISBN:https://doi.org/10.48550/arXiv.2411.04576 / Published by ArXiv / on (web) Publishing site
1 Introduction
2 Background and Related Work
4 Findings
5 Discussion


A Survey on Medical Large Language Models: Technology, Application, Trustworthiness, and Future Directions / 2406.03712 / ISBN:https://doi.org/10.48550/arXiv.2406.03712 / Published by ArXiv / on (web) Publishing site
I. Introduction
II. Background and Technology
III. From General to Medical-Specific LLMs
IV. Improving Algorithms for Med-LLMs
V. Applying Medical LLMs
VII. Future Directions


The doctor will polygraph you now: ethical concerns with AI for fact-checking patients / 2408.07896 / ISBN:https://doi.org/10.48550/arXiv.2408.07896 / Published by ArXiv / on (web) Publishing site
1. Introduction
2. Clinical, Technical, and Ethical Concerns
3. Methods
4. Results
5. Discussion


Large-scale moral machine experiment on large language models / 2411.06790 / ISBN:https://doi.org/10.48550/arXiv.2411.06790 / Published by ArXiv / on (web) Publishing site
Materials and methods
Discussion


Collaborative Participatory Research with LLM Agents in South Asia: An Empirically-Grounded Methodological Initiative and Agenda from Field Evidence in Sri Lanka / 2411.08294 / ISBN:https://doi.org/10.48550/arXiv.2411.08294 / Published by ArXiv / on (web) Publishing site
1 Introduction
2 Why South Asia Needs This Now
4 Field Work and Implementation Insights


Programming with AI: Evaluating ChatGPT, Gemini, AlphaCode, and GitHub Copilot for Programmers / 2411.09224 / ISBN:https://doi.org/10.48550/arXiv.2411.09224 / Published by ArXiv / on (web) Publishing site
1 Introduction
2 Related Work
3 Transformer Architecture
8 Ethical Issues
10 Limitations


Generative AI in Multimodal User Interfaces: Trends, Challenges, and Cross-Platform Adaptability / 2411.10234 / ISBN:https://doi.org/10.48550/arXiv.2411.10234 / Published by ArXiv / on (web) Publishing site
I. Introduction
II. Problem Statement: the Interface Dilemma
V. Multimodal Interaction
VI. Limitations, Challenges, and Future Directions for AI-Driven Interfaces


Bias in Large Language Models: Origin, Evaluation, and Mitigation / 2411.10915 / ISBN:https://doi.org/10.48550/arXiv.2411.10915 / Published by ArXiv / on (web) Publishing site
1. Introduction
2. Intrinsic Bias
4. Bias Evaluation
5. Bias Mitigation
6. Ethical Concerns and Legal Challenges


Framework for developing and evaluating ethical collaboration between expert and machine / 2411.10983 / ISBN:https://doi.org/10.48550/arXiv.2411.10983 / Published by ArXiv / on (web) Publishing site
1. Introduction
2. Method


Chat Bankman-Fried: an Exploration of LLM Alignment in Finance / 2411.11853 / ISBN:https://doi.org/10.48550/arXiv.2411.11853 / Published by ArXiv / on (web) Publishing site
3 Experimental framework
Appendices


GPT versus Humans: Uncovering Ethical Concerns in Conversational Generative AI-empowered Multi-Robot Systems / 2411.14009 / ISBN:https://doi.org/10.48550/arXiv.2411.14009 / Published by ArXiv / on (web) Publishing site
Abstract
1 Introduction
2 Background


Privacy-Preserving Video Anomaly Detection: A Survey / 2411.14565 / ISBN:https://doi.org/10.48550/arXiv.2411.14565 / Published by ArXiv / on (web) Publishing site
I. Introduction
II. Foundations of P2VAD
III. P2VAD with Non-Identifiable Elements
IV. Desensitized Intermediate Modalities P2VAD
V. Edge-Cloud Intelligence Empowered P2VAD
VI. Evaluation Benchmarks and Metrics
VII. Discussion


Advancing Transformative Education: Generative AI as a Catalyst for Equity and Innovation / 2411.15971 / ISBN:https://doi.org/10.48550/arXiv.2411.15971 / Published by ArXiv / on (web) Publishing site
4 Findings and Discussion
6 Ethical Implications of Generative AI in Education
9 Future Work
10 Proposed Policy Recommendations


Good intentions, unintended consequences: exploring forecasting harms / 2411.16531 / ISBN:https://doi.org/10.48550/arXiv.2411.16531 / Published by ArXiv / on (web) Publishing site
6 A Research agenda
Appendices


AI-Augmented Ethical Hacking: A Practical Examination of Manual Exploitation and Privilege Escalation in Linux Environments / 2411.17539 / ISBN:https://doi.org/10.48550/arXiv.2411.17539 / Published by ArXiv / on (web) Publishing site
2 Generative AI and ChatGPT


Examining Multimodal Gender and Content Bias in ChatGPT-4o / 2411.19140 / ISBN:https://doi.org/10.48550/arXiv.2411.19140 / Published by ArXiv / on (web) Publishing site
1. Introduction


Ethics and Artificial Intelligence Adoption / 2412.00330 / ISBN:https://doi.org/10.48550/arXiv.2412.00330 / Published by ArXiv / on (web) Publishing site
II. Literature Review


Artificial Intelligence Policy Framework for Institutions / 2412.02834 / ISBN:https://doi.org/10.48550/arXiv.2412.02834 / Published by ArXiv / on (web) Publishing site
II. Context for AI
III. Key Considerations for AI Policy


Ethical Challenges and Evolving Strategies in the Integration of Artificial Intelligence into Clinical Practice / 2412.03576 / ISBN:https://doi.org/10.48550/arXiv.2412.03576 / Published by ArXiv / on (web) Publishing site
Introduction and Motivation
Core Ethical Challenges


Exploring AI Text Generation, Retrieval-Augmented Generation, and Detection Technologies: a Comprehensive Overview / 2412.03933 / ISBN:https://doi.org/10.48550/arXiv.2412.03933 / Published by ArXiv / on (web) Publishing site
I. Introduction
II. AI Text Generators (AITG)
III. Retrieval Augmented Generation (RAG)
IV. Tools and Methods for RAG


Large Language Models in Politics and Democracy: A Comprehensive Survey / 2412.04498 / ISBN:https://doi.org/10.48550/arXiv.2412.04498 / Published by ArXiv / on (web) Publishing site
1. Introduction
2. Understanding Large Language Models
3. LLM Applications in Politics


From Principles to Practice: A Deep Dive into AI Ethics and Regulations / 2412.04683 / ISBN:https://doi.org/10.48550/arXiv.2412.04683 / Published by ArXiv / on (web) Publishing site
II AI Practice and Contextual Integrity
IV Integrative AI Ethics


Technology as uncharted territory: Contextual integrity and the notion of AI as new ethical ground / 2412.05130 / ISBN:https://doi.org/10.48550/arXiv.2412.05130 / Published by ArXiv / on (web) Publishing site
II AI Practice and Contextual Integrity
IV Integrative AI Ethics


Can OpenAI o1 outperform humans in higher-order cognitive thinking? / 2412.05753 / ISBN:https://doi.org/10.48550/arXiv.2412.05753 / Published by ArXiv / on (web) Publishing site
3 Results


Political-LLM: Large Language Models in Political Science / 2412.06864 / ISBN:https://doi.org/10.48550/arXiv.2412.06864 / Published by ArXiv / on (web) Publishing site
2 Preliminaries
4 Classical Political Science Functions and Modern Transformations
5 Technical Foundations for LLM Applications in Political Science


Trustworthy artificial intelligence in the energy sector: Landscape analysis and evaluation framework / 2412.07782 / ISBN:https://doi.org/10.48550/arXiv.2412.07782 / Published by ArXiv / on (web) Publishing site
III. E-TAI – Methodological Framework for Trustworthy AI in the Energy Domain


Towards Foundation-model-based Multiagent System to Accelerate AI for Social Impact / 2412.07880 / ISBN:https://doi.org/10.48550/arXiv.2412.07880 / Published by ArXiv / on (web) Publishing site
3 Formulating the Problem
4 Designing Solution Methods
5 Testing and Deployment


Bias in Large Language Models: Origin, Evaluation, and Mitigation / 2411.10915 / ISBN:https://doi.org/10.48550/arXiv.2411.10915 / Published by ArXiv / on (web) Publishing site
Appendices


Ethical Challenges and Evolving Strategies in the Integration of Artificial Intelligence into Clinical Practice / 2412.03576 / ISBN:https://doi.org/10.48550/arXiv.2412.03576 / Published by ArXiv / on (web) Publishing site
Discussion


CERN for AI: A Theoretical Framework for Autonomous Simulation-Based Artificial Intelligence Testing and Alignment / 2312.09402 / ISBN:https://doi.org/10.48550/arXiv.2312.09402 / Published by ArXiv / on (web) Publishing site
Introduction
Discussion
Conclusion


Reviewing Intelligent Cinematography: AI research for camera-based video production / 2405.05039 / ISBN:https://doi.org/10.48550/arXiv.2405.05039 / Published by ArXiv / on (web) Publishing site
3 Intelligent Cinematography in Production
Appendices


Shaping AI's Impact on Billions of Lives / 2412.02730 / ISBN:https://doi.org/10.48550/arXiv.2412.02730 / Published by ArXiv / on (web) Publishing site
Introduction
I. Putting Pragmatic AI in Context
II. Demystifying the Potential Impact of AI


Intelligent Electric Power Steering: Artificial Intelligence Integration Enhances Vehicle Safety and Performance / 2412.08133 / ISBN:https://doi.org/10.48550/arXiv.2412.08133 / Published by ArXiv / on (web) Publishing site
III. AI Integration in EPS: Safety and Performance Enhancement


AI Ethics in Smart Homes: Progress, User Requirements and Challenges / 2412.09813 / ISBN:https://doi.org/10.48550/arXiv.2412.09813 / Published by ArXiv / on (web) Publishing site
5 AI Ethics from Technology's Perspective


Research Integrity and GenAI: A Systematic Analysis of Ethical Challenges Across Research Phases / 2412.10134 / ISBN:https://doi.org/10.48550/arXiv.2412.10134 / Published by ArXiv / on (web) Publishing site
Research Phases and AI Tools
Discussion


On Large Language Models in Mission-Critical IT Governance: Are We Ready Yet? / 2412.11698 / ISBN:https://doi.org/10.48550/arXiv.2412.11698 / Published by ArXiv / on (web) Publishing site
III. Results


Clio: Privacy-Preserving Insights into Real-World AI Use / 2412.13678 / ISBN:https://doi.org/10.48550/arXiv.2412.13678 / Published by ArXiv / on (web) Publishing site
4 Clio for safety


User-Generated Content and Editors in Games: A Comprehensive Survey / 2412.13743 / ISBN:https://doi.org/10.48550/arXiv.2412.13743 / Published by ArXiv / on (web) Publishing site
III. Categories of User-Generated Content


Understanding and Evaluating Trust in Generative AI and Large Language Models for Spreadsheets / 2412.14062 / ISBN:https://doi.org/10.48550/arXiv.2412.14062 / Published by ArXiv / on (web) Publishing site
2.0 Trust in Automation


Towards Friendly AI: A Comprehensive Review and New Perspectives on Human-AI Alignment / 2412.15114 / ISBN:https://doi.org/10.48550/arXiv.2412.15114 / Published by ArXiv / on (web) Publishing site
I. Introduction
III. Theoretical Perspectives


Autonomous Vehicle Security: A Deep Dive into Threat Modeling / 2412.15348 / ISBN:https://doi.org/10.48550/arXiv.2412.15348 / Published by ArXiv / on (web) Publishing site
VIII. Future Direction and Discussion


Ethics and Technical Aspects of Generative AI Models in Digital Content Creation / 2412.16389 / ISBN:https://doi.org/10.48550/arXiv.2412.16389 / Published by ArXiv / on (web) Publishing site
2 Literature Review
5 Discussion


Large Language Model Safety: A Holistic Survey / 2412.17686 / ISBN:https://doi.org/10.48550/arXiv.2412.17686 / Published by ArXiv / on (web) Publishing site
1 Introduction
3 Value Misalignment
4 Robustness to Attack
5 Misuse
7 Agent Safety
8 Interpretability for LLM Safety
9 Technology Roadmaps / Strategies to LLM Safety in Practice
10 Governance


Autonomous Alignment with Human Value on Altruism through Considerate Self-imagination and Theory of Mind / 2501.00320 / ISBN:https://doi.org/10.48550/arXiv.2501.00320 / Published by ArXiv / on (web) Publishing site
2 Results
3 Discussion


Generative AI and LLMs in Industry: A text-mining Analysis and Critical Evaluation of Guidelines and Policy Statements Across Fourteen Industrial Sectors / 2501.00957 / ISBN:https://doi.org/10.48550/arXiv.2501.00957 / Published by ArXiv / on (web) Publishing site
Introduction
V. Discussion and Synthesis


INFELM: In-depth Fairness Evaluation of Large Text-To-Image Models / 2501.01973 / ISBN:https://doi.org/10.48550/arXiv.2501.01973 / Published by ArXiv / on (web) Publishing site
4 Method
5 Experiments & Results


Datasheets for Healthcare AI: A Framework for Transparency and Bias Mitigation / 2501.05617 / ISBN:https://doi.org/10.48550/arXiv.2501.05617 / Published by ArXiv / on (web) Publishing site
2. Literature Review


Concerns and Values in Human-Robot Interactions: A Focus on Social Robotics / 2501.05628 / ISBN:https://doi.org/10.48550/arXiv.2501.05628 / Published by ArXiv / on (web) Publishing site
4 Phase 2: Focus Groups
Appendices


Hybrid Approaches for Moral Value Alignment in AI Agents: a Manifesto / 2312.01818 / ISBN:https://doi.org/10.48550/arXiv.2312.01818 / Published by ArXiv / on (web) Publishing site
1. Introduction
2. Learning Morality in Machines
3. Designing AI Agents based on Moral Principles
6. Conclusion


A Blockchain-Enabled Approach to Cross-Border Compliance and Trust / 2501.09182 / ISBN:https://doi.org/10.48550/arXiv.2501.09182 / Published by ArXiv / on (web) Publishing site
I. Introduction


Towards A Litmus Test for Common Sense / 2501.09913 / ISBN:https://doi.org/10.48550/arXiv.2501.09913 / Published by ArXiv / on (web) Publishing site
1 Introduction and Motivation
7 Mathematical Formulation for LLMs and AI


Uncovering Bias in Foundation Models: Impact, Testing, Harm, and Mitigation / 2501.10453 / ISBN:https://doi.org/10.48550/arXiv.2501.10453 / Published by ArXiv / on (web) Publishing site
Abstract
1 Introduction
2 Results and Discussion
Supplementary


Securing the AI Frontier: Urgent Ethical and Regulatory Imperatives for AI-Driven Cybersecurity / 2501.10467 / ISBN:https://doi.org/10.48550/arXiv.2501.10467 / Published by ArXiv / on (web) Publishing site
IV. Ethical Considerations in AI Deployment for Cybersecurity


Bias in Decision-Making for AI's Ethical Dilemmas: A Comparative Study of ChatGPT and Claude / 2501.10484 / ISBN:https://doi.org/10.48550/arXiv.2501.10484 / Published by ArXiv / on (web) Publishing site
Introduction
Related Works


AI Toolkit: Libraries and Essays for Exploring the Technology and Ethics of AI / 2501.10576 / ISBN:https://doi.org/10.48550/arXiv.2501.10576 / Published by ArXiv / on (web) Publishing site
2 Description of AITK
5 Usability Study


Harnessing the Potential of Large Language Models in Modern Marketing Management: Applications, Future Directions, and Strategic Recommendations / 2501.10685 / ISBN:https://doi.org/10.48550/arXiv.2501.10685 / Published by ArXiv / on (web) Publishing site
1. Introduction
2- Technological Foundations
5- Customer Communication and Assistance
6- Campaign Optimization and Management
10- Case Studies and Real-world Applications


Development of Application-Specific Large Language Models to Facilitate Research Ethics Review / 2501.10741 / ISBN:https://doi.org/10.48550/arXiv.2501.10741 / Published by ArXiv / on (web) Publishing site
III. Generative AI for IRB review
IV. Application-Specific IRB LLMs
V. Discussion: Potential Benefits, Risks, and Replies


Governance of Generative AI in Creative Work: Consent, Credit, Compensation, and Beyond / 2501.11457 / ISBN:https://doi.org/10.48550/arXiv.2501.11457 / Published by ArXiv / on (web) Publishing site
Abstract
1 Introduction
2 Background
4 Results
5 Discussion


Human services organizations and the responsible integration of AI: Considering ethics and contextualizing risk(s) / 2501.11705 / ISBN:https://doi.org/10.48550/arXiv.2501.11705 / Published by ArXiv / on (web) Publishing site
Ethical Issues in Context


A Critical Field Guide for Working with Machine Learning Datasets / 2501.15491 / ISBN:https://doi.org/10.48550/arXiv.2501.15491 / Published by ArXiv / on (web) Publishing site
1. Introduction to Machine Learning Datasets
5. Transforming Datasets
Credits, Acknowledgments


The Third Moment of AI Ethics: Developing Relatable and Contextualized Tools / 2501.16954 / ISBN:https://doi.org/10.48550/arXiv.2501.16954 / Published by ArXiv / on (web) Publishing site
Appendices


A Case Study in Acceleration AI Ethics: The TELUS GenAI Conversational Agent / 2501.18038 / ISBN:https://doi.org/10.48550/arXiv.2501.18038 / Published by ArXiv / on (web) Publishing site
5. Mapping overlaps between TELUS innovation and acceleration ethics in the area of privacy


Examining the Expanding Role of Synthetic Data Throughout the AI Development Pipeline / 2501.18493 / ISBN:https://doi.org/10.48550/arXiv.2501.18493 / Published by ArXiv / on (web) Publishing site
4 Findings
5 Discussion


Towards Safe AI Clinicians: A Comprehensive Study on Large Language Model Jailbreaking in Healthcare / 2501.18632 / ISBN:https://doi.org/10.48550/arXiv.2501.18632 / Published by ArXiv / on (web) Publishing site
Model Guardrail Enhancement


Constructing AI ethics narratives based on real-world data: Human-AI collaboration in data-driven visual storytelling / 2502.00637 / ISBN:https://doi.org/10.48550/arXiv.2502.00637 / Published by ArXiv / on (web) Publishing site
2 Related Work
5 Discussion


Meursault as a Data Point / 2502.01364 / ISBN:https://doi.org/10.48550/arXiv.2502.01364 / Published by ArXiv / on (web) Publishing site
II. Literature Review
IV. Methodology
V. Results


Ethical Considerations for the Military Use of Artificial Intelligence in Visual Reconnaissance / 2502.03376 / ISBN:https://doi.org/10.48550/arXiv.2502.03376 / Published by ArXiv / on (web) Publishing site
2 Principles of Ethical AI
3 Use Case 1 - Decision Support for Maritime Surveillance
5 Use Case 3 - Land-based Reconnaissance in Inhabited Area


FairT2I: Mitigating Social Bias in Text-to-Image Generation via Large Language Model-Assisted Detection and Attribute Rebalancing / 2502.03826 / ISBN:https://doi.org/10.48550/arXiv.2502.03826 / Published by ArXiv / on (web) Publishing site
1. Introduction
2. Related Work


Cognitive AI framework: advances in the simulation of human thought / 2502.04259 / ISBN:https://doi.org/10.48550/arXiv.2502.04259 / Published by ArXiv / on (web) Publishing site
Abstract
1. Introduction to Cognitive AI
2. Core Components of the Framework
3. Data Flow and Process Logic
4. Cognitive Comparative Analysis
7. Conclusion and Future Prospects


Open Foundation Models in Healthcare: Challenges, Paradoxes, and Opportunities with GenAI Driven Personalized Prescription / 2502.04356 / ISBN:https://doi.org/10.48550/arXiv.2502.04356 / Published by ArXiv / on (web) Publishing site
I. Introduction
II. Background
III. State-of-the-Art in Open Healthcare LLMs and AIFMs


Safety at Scale: A Comprehensive Survey of Large Model Safety / 2502.05206 / ISBN:https://doi.org/10.48550/arXiv.2502.05206 / Published by ArXiv / on (web) Publishing site
2 Vision Foundation Model Safety
3 Large Language Model Safety
4 Vision-Language Pre-Training Model Safety
5 Vision-Language Model Safety
6 Diffusion Model Safety
8 Open Challenges


The Odyssey of the Fittest: Can Agents Survive and Still Be Good? / 2502.05442 / ISBN:https://doi.org/10.48550/arXiv.2502.05442 / Published by ArXiv / on (web) Publishing site
Introduction
Method
Results
Discussion


Position: We Need An Adaptive Interpretation of Helpful, Honest, and Harmless Principles / 2502.06059 / ISBN:https://doi.org/10.48550/arXiv.2502.06059 / Published by ArXiv / on (web) Publishing site
7 Open Challenge


Integrating Generative Artificial Intelligence in ADRD: A Framework for Streamlining Diagnosis and Care in Neurodegenerative Diseases / 2502.06842 / ISBN:https://doi.org/10.48550/arXiv.2502.06842 / Published by ArXiv / on (web) Publishing site
High Quality Data Collection


Fairness in Multi-Agent AI: A Unified Framework for Ethical and Equitable Autonomous Systems / 2502.07254 / ISBN:https://doi.org/10.48550/arXiv.2502.07254 / Published by ArXiv / on (web) Publishing site
Introduction


Agentic AI: Expanding the Algorithmic Frontier of Creative Problem Solving / 2502.00289 / ISBN:https://doi.org/10.48550/arXiv.2502.00289 / Published by ArXiv / on (web) Publishing site
Two-sided Algorithmic Markets


DailyDilemmas: Revealing Value Preferences of LLMs with Quandaries of Daily Life / 2410.02683 / ISBN:https://doi.org/10.48550/arXiv.2410.02683 / Published by ArXiv / on (web) Publishing site
Appendices


From large language models to multimodal AI: A scoping review on the potential of generative AI in medicine / 2502.09242 / ISBN:https://doi.org/10.48550/arXiv.2502.09242 / Published by ArXiv / on (web) Publishing site
4 Language models in medicine
5 Multimodal language models in medicine
6 Evaluation metrics for generative AI in medicine


Relational Norms for Human-AI Cooperation / 2502.12102 / ISBN:https://doi.org/10.48550/arXiv.2502.12102 / Published by ArXiv / on (web) Publishing site
Introduction
Section 2: Distinctive Characteristics of AI and Implications for Relational Norms
Section 3: Considerations and Future Directions for AI Governance and Design


AI and the Transformation of Accountability and Discretion in Urban Governance / 2502.13101 / ISBN:https://doi.org/10.48550/arXiv.2502.13101 / Published by ArXiv / on (web) Publishing site
4. Guiding Principles for AI-Enhanced Governance in Urban Policy


Multi-Agent Risks from Advanced AI / 2502.14143 / ISBN:https://doi.org/10.48550/arXiv.2502.14143 / Published by ArXiv / on (web) Publishing site
1 Introduction
2 Failure Modes
3 Risk Factors
4 Implications
Appendices


On the Trustworthiness of Generative Foundation Models: Guideline, Assessment, and Perspective / 2502.14296 / ISBN:https://doi.org/10.48550/arXiv.2502.14296 / Published by ArXiv / on (web) Publishing site
1 Introduction
2 Background
3 Guidelines of Trustworthy Generative Foundation Models
6 Benchmarking Large Language Models
7 Benchmarking Vision-Language Models
9 Trustworthiness in Downstream Applications
10 Further Discussion


Surgical Scene Understanding in the Era of Foundation AI Models: A Comprehensive Review / 2502.14886 / ISBN:https://doi.org/10.48550/arXiv.2502.14886 / Published by ArXiv / on (web) Publishing site
I. Introduction
II. Background and Challenges
III. ML/DL Applications in Surgical Tool Recognition
IV. ML/DL Applications in Surgical Workflow Analysis
V. ML/DL Applications in Surgical Training and Simulation


Fair Foundation Models for Medical Image Analysis: Challenges and Perspectives / 2502.16841 / ISBN:https://doi.org/10.48550/arXiv.2502.16841 / Published by ArXiv / on (web) Publishing site
Abstract
2 Background and Taxonomy
4 Environmental Impact
5 Policymakers


Why do we do this?: Moral Stress and the Affective Experience of Ethics in Practice / 2502.18395 / ISBN:https://doi.org/10.48550/arXiv.2502.18395 / Published by ArXiv / on (web) Publishing site
6 Discussion


Comprehensive Analysis of Transparency and Accessibility of ChatGPT, DeepSeek, And other SoTA Large Language Models / 2502.18505 / ISBN:https://doi.org/10.48550/arXiv.2502.18505 / Published by ArXiv / on (web) Publishing site
1. Introduction
2. Methodology
3. Results
4. Discussion
5. Conclusion


Developmental Support Approach to AI's Autonomous Growth: Toward the Realization of a Mutually Beneficial Stage Through Experiential Learning / 2502.19798 / ISBN:https://doi.org/10.48550/arXiv.2502.19798 / Published by ArXiv / on (web) Publishing site
Experiments and Results


An LLM-based Delphi Study to Predict GenAI Evolution / 2502.21092 / ISBN:https://doi.org/10.48550/arXiv.2502.21092 / Published by ArXiv / on (web) Publishing site
3 Results
4 Discussion
5 Conclusions


Digital Doppelgangers: Ethical and Societal Implications of Pre-Mortem AI Clones / 2502.21248 / ISBN:https://doi.org/10.48550/arXiv.2502.21248 / Published by ArXiv / on (web) Publishing site
1 Introduction
2 Defining Pre-Mortem AI Clones and Generative Ghosts
4 Legal and Societal Implications of Pre-Mortem AI Clones


Can AI Model the Complexities of Human Moral Decision-Making? A Qualitative Study of Kidney Allocation Decisions / 2503.00940 / ISBN:https://doi.org/10.48550/arXiv.2503.00940 / Published by ArXiv / on (web) Publishing site
1 Introduction
5 Discussion


Digital Dybbuks and Virtual Golems: AI, Memory, and the Ethics of Holocaust Testimony / 2503.01369 / ISBN:https://doi.org/10.48550/arXiv.2503.01369 / Published by ArXiv / on (web) Publishing site
The permissibility of digital duplicates in Holocaust remembrance and education


Vision Language Models in Medicine / 2503.01863 / ISBN:https://doi.org/10.48550/arXiv.2503.01863 / Published by ArXiv / on (web) Publishing site
II. Multi-Modal Visual Text Models
III. Core Concepts of Visual Language Modeling
IV. VLM Benchmarking and Evaluations
V. Challenges and Limitations
VI. Opportunities and Future Directions


Twenty Years of Personality Computing: Threats, Challenges and Future Directions / 2503.02082 / ISBN:https://doi.org/10.48550/arXiv.2503.02082 / Published by ArXiv / on (web) Publishing site
2 Background, History and Resources
3 Personality Computing Systems


AI Automatons: AI Systems Intended to Imitate Humans / 2503.02250 / ISBN:https://doi.org/10.48550/arXiv.2503.02250 / Published by ArXiv / on (web) Publishing site
3 Conceptual Framework for AI Automatons


Compliance of AI Systems / 2503.05571 / ISBN:https://doi.org/10.48550/arXiv.2503.05571 / Published by ArXiv / on (web) Publishing site
II. Steps to Ensure Compliance in Applying AI
IV. Platform-Based Approach for Trustworthy and Compliant AI


Between Innovation and Oversight: A Cross-Regional Study of AI Risk Management Frameworks in the EU, U.S., UK, and China / 2503.05773 / ISBN:https://doi.org/10.48550/arXiv.2503.05773 / Published by ArXiv / on (web) Publishing site
5 Case Studies


Medical Hallucinations in Foundation Models and Their Impact on Healthcare / 2503.05777 / ISBN:https://doi.org/10.48550/arXiv.2503.05777 / Published by ArXiv / on (web) Publishing site
2 LLM Hallucinations in Medicine
3 Causes of Hallucinations
5 Mitigation Strategies
6 Experiments on Medical Hallucination Benchmark
Appendices


Decoding the Black Box: Integrating Moral Imagination with Technical AI Governance / 2503.06411 / ISBN:https://doi.org/10.48550/arXiv.2503.06411 / Published by ArXiv / on (web) Publishing site
2 Key Themes from Weapons of Math Destruction


Generative AI in Transportation Planning: A Survey / 2503.07158 / ISBN:https://doi.org/10.48550/arXiv.2503.07158 / Published by ArXiv / on (web) Publishing site
2 Background
3 Classical Transportation Planning Functions and Modern Transformations
4 Technical Foundations for Generative AI Applications in Transportation Planning
5 Future Directions & Challenges


Mapping out AI Functions in Intelligent Disaster (Mis)Management and AI-Caused Disasters / 2502.16644 / ISBN:https://doi.org/10.48550/arXiv.2502.16644 / Published by ArXiv / on (web) Publishing site
1. Introduction
3. Intelligent Disaster Mismanagement (IDMM)


AI Governance InternationaL Evaluation Index (AGILE Index) / 2502.15859 / ISBN:https://doi.org/10.48550/arXiv.2502.15859 / Published by ArXiv / on (web) Publishing site
3. Analysis and Observations


Detecting Dataset Bias in Medical AI: A Generalized and Modality-Agnostic Auditing Framework / 2503.09969 / ISBN:https://doi.org/10.48550/arXiv.2503.09969 / Published by ArXiv / on (web) Publishing site
1 Introduction
2 Results
4 Methods


MinorBench: A hand-built benchmark for content-based risks for children / 2503.10242 / ISBN:https://doi.org/10.48550/arXiv.2503.10242 / Published by ArXiv / on (web) Publishing site
7 Discussion


LLMs in Disease Diagnosis: A Comparative Study of DeepSeek-R1 and O3 Mini Across Chronic Health Conditions / 2503.10486 / ISBN:https://doi.org/10.48550/arXiv.2503.10486 / Published by ArXiv / on (web) Publishing site
1 Introduction
3 Methodology


Synthetic Data for Robust AI Model Development in Regulated Enterprises / 2503.12353 / ISBN:https://doi.org/10.48550/arXiv.2503.12353 / Published by ArXiv / on (web) Publishing site
Synthetic Data Generation for Enterprise AI Development
Case Studies in Regulated Industries
Challenges and Limitations
Future Directions


Ethical Implications of AI in Data Collection: Balancing Innovation with Privacy / 2503.14539 / ISBN:https://doi.org/10.48550/arXiv.2503.14539 / Published by ArXiv / on (web) Publishing site
Introduction


Regulating Ai In Financial Services: Legal Frameworks And Compliance Challenges / 2503.14541 / ISBN:https://doi.org/10.48550/arXiv.2503.14541 / Published by ArXiv / on (web) Publishing site
Article


A Peek Behind the Curtain: Using Step-Around Prompt Engineering to Identify Bias and Misinformation in GenAI Models / 2503.15205 / ISBN:https://doi.org/10.48550/arXiv.2503.15205 / Published by ArXiv / on (web) Publishing site
Literature Review
Step-Around Prompting: A Research Tool and Potential Threat
Ethics of Step-Around Prompting
Discussion and Recommendations


Advancing Human-Machine Teaming: Concepts, Challenges, and Applications / 2503.16518 / ISBN:https://doi.org/10.48550/arXiv.2503.16518 / Published by ArXiv / on (web) Publishing site
1 Introduction
3 Empirical Studies to Promote Team Performance
6 Concepts, Challenges, and Applications of Human-Machine Teaming


Advancing Problem-Based Learning in Biomedical Engineering in the Era of Generative AI / 2503.16558 / ISBN:https://doi.org/10.48550/arXiv.2503.16558 / Published by ArXiv / on (web) Publishing site
Abstract
I. Introduction
III. Case Study: PBL for Biomedical AI Education
IV. Challenges and Opportunities


HH4AI: A methodological Framework for AI Human Rights impact assessment under the EUAI ACT / 2503.18994 / ISBN:https://doi.org/10.48550/arXiv.2503.18994 / Published by ArXiv / on (web) Publishing site
3 Standards and Guidelines


AI Identity, Empowerment, and Mindfulness in Mitigating Unethical AI Use / 2503.20099 / ISBN:https://doi.org/10.48550/arXiv.2503.20099 / Published by ArXiv / on (web) Publishing site
Introduction
Literature Review


A Systematic Decade Review of Trip Route Planning with Travel Time Estimation based on User Preferences and Behavior / 2503.23486 / ISBN:https://doi.org/10.48550/arXiv.2503.23486 / Published by ArXiv / on (web) Publishing site
I. Introduction
III. Recent Works
IV. Limitations and Future Trends


Towards Adaptive AI Governance: Comparative Insights from the U.S., EU, and Asia / 2504.00652 / ISBN:https://doi.org/10.48550/arXiv.2504.00652 / Published by ArXiv / on (web) Publishing site
I. Introduction


Bridging the Gap: Integrating Ethics and Environmental Sustainability in AI Research and Practice / 2504.00797 / ISBN:https://doi.org/10.48550/arXiv.2504.00797 / Published by ArXiv / on (web) Publishing site
4 Transversal Issues in AI Ethics and Sustainability
5 Establishing Best Practices for AI Ethics and Sustainability


Who is Responsible When AI Fails? Mapping Causes, Entities, and Consequences of AI Privacy and Ethical Incidents / 2504.01029 / ISBN:https://doi.org/10.48550/arXiv.2504.01029 / Published by ArXiv / on (web) Publishing site
2. Related Work


Who Owns the Output? Bridging Law and Technology in LLMs Attribution / 2504.01032 / ISBN:https://doi.org/10.48550/arXiv.2504.01032 / Published by ArXiv / on (web) Publishing site
Abstract
2 Legal Background
3 From Legal Frameworks to Technological Solutions
4 Legal cases as Use Cases for Attribution Methods
5 Discussion


Systematic Literature Review: Explainable AI Definitions and Challenges in Education / 2504.02910 / ISBN:https://doi.org/10.48550/arXiv.2504.02910 / Published by ArXiv / on (web) Publishing site
Discussion


Language-Dependent Political Bias in AI: A Study of ChatGPT and Gemini / 2504.06436 / ISBN:https://doi.org/10.48550/arXiv.2504.06436 / Published by ArXiv / on (web) Publishing site
2. Artificial Intelligence
4. Results
5. Conclusion


We Are All Creators: Generative AI, Collective Knowledge, and the Path Towards Human-AI Synergy / 2504.07936 / ISBN:https://doi.org/10.48550/arXiv.2504.07936 / Published by ArXiv / on (web) Publishing site
1 Introduction
2 The Connectionist Nature of Generative AI: Beyond the Black Box
3 AI Creativity as Alternative Cognition
4 Navigating the Copyright Labyrinth: Collective Input, Individual Output?
6 Towards Human-AI Synergy: A Pragmatic Path Forward


Towards interactive evaluations for interaction harms in human-AI systems / 2405.10632 / ISBN:https://doi.org/10.48550/arXiv.2405.10632 / Published by ArXiv / on (web) Publishing site
2 An overview of the generative AI evaluation landscape


AI-Driven Healthcare: A Review on Ensuring Fairness and Mitigating Bias / 2407.19655 / ISBN:https://doi.org/10.48550/arXiv.2407.19655 / Published by ArXiv / on (web) Publishing site
Introduction
2 Fairness Concerns in Healthcare
3 Addressing and Mitigating Unfairness in AI
4 Research gaps and Future directions


A Comprehensive Survey on Integrating Large Language Models with Knowledge-Based Methods / 2501.13947 / ISBN:https://doi.org/10.48550/arXiv.2501.13947 / Published by ArXiv / on (web) Publishing site
2. Overview of LLMs
3. Challenges in implementing LLMs for real-world scenarios
4. Solutions to address LLM challenges
5. Integrating LLMs with knowledge bases


An Empirical Study on Decision-Making Aspects in Responsible Software Engineering for AI / 2501.15691 / ISBN:https://doi.org/10.48550/arXiv.2501.15691 / Published by ArXiv / on (web) Publishing site
IV. Results and Analysis


Transforming Science with Large Language Models: A Survey on AI-assisted Scientific Discovery, Experimentation, Content Generation, and Evaluation / 2502.05151 / ISBN:https://doi.org/10.48550/arXiv.2502.05151 / Published by ArXiv / on (web) Publishing site
3 AI Support for Individual Topics and Tasks
Appendix


on (web) Publishing site
Who Evaluates and How?
Other Considerations
Case Studies


Confirmation Bias in Generative AI Chatbots: Mechanisms, Risks, Mitigation Strategies, and Future Research Directions / 2504.09343 / ISBN:https://doi.org/10.48550/arXiv.2504.09343 / Published by ArXiv / on (web) Publishing site
3. Confirmation Bias in Generative AI Chatbots
4. Mechanisms of Confirmation Bias in Chatbot Architectures
7. Future Research Directions


Framework, Standards, Applications and Best practices of Responsible AI: A Comprehensive Survey / 2504.13979 / ISBN:https://doi.org/10.48550/arXiv.2504.13979 / Published by ArXiv / on (web) Publishing site
5. AI Standards and Regulations
6. Applications of RAI
7. RAI in Technology


Values in the Wild: Discovering and Analyzing Values in Real-World Language Model Interactions / 2504.15236 / ISBN:https://doi.org/10.48550/arXiv.2504.15236 / Published by ArXiv / on (web) Publishing site
Appendix


Towards responsible AI for education: Hybrid human-AI to confront the Elephant in the room / 2504.16148 / ISBN:https://doi.org/10.48550/arXiv.2504.16148 / Published by ArXiv / on (web) Publishing site
3. Challenges of current AI methods in education: The Elephant in the room
4. Hybrid human-AI methods for responsible AI for education
5. Conclusion


Approaches to Responsible Governance of GenAI in Organizations / 2504.17044 / ISBN:https://doi.org/10.48550/arXiv.2504.17044 / Published by ArXiv / on (web) Publishing site
III. Identified Concerns and Risks
V. Implementation Plan: Toward Actionable GenAI Governance


Auditing the Ethical Logic of Generative AI Models / 2504.17544 / ISBN:https://doi.org/10.48550/arXiv.2504.17544 / Published by ArXiv / on (web) Publishing site
Seven Contemporary LLMs


AI Ethics and Social Norms: Exploring ChatGPT's Capabilities From What to How / 2504.18044 / ISBN:https://doi.org/10.48550/arXiv.2504.18044 / Published by ArXiv / on (web) Publishing site
4 Result


A Design Framework for operationalizing Trustworthy Artificial Intelligence in Healthcare: Requirements, Tradeoffs and Challenges for its Clinical Adoption / 2504.19179 / ISBN:https://doi.org/10.48550/arXiv.2504.19179 / Published by ArXiv / on (web) Publishing site
3. An overview of the AI ecosystem in the medical field: processes, data, and stakeholders
4. Design framework for medical AI systems
5. Tradeoffs between TAI principles and requirements


Ethical Challenges of Using Artificial Intelligence in Judiciary / 2504.19284 / ISBN:https://doi.org/10.48550/arXiv.2504.19284 / Published by ArXiv / on (web) Publishing site
III. Ethical Challenges of Using AI in Judiciary
V. Conclusion


The EU AI Act in Development Practice: A Pro-justice Approach / 2504.20075 / ISBN:https://doi.org/10.48550/arXiv.2504.20075 / Published by ArXiv / on (web) Publishing site
4. A Pro-Justice Approach to the Act in Practice


AI Awareness / 2504.20084 / ISBN:https://doi.org/10.48550/arXiv.2504.20084 / Published by ArXiv / on (web) Publishing site
3 Evaluating AI Awareness in LLMs
4 AI Awareness and AI Capabilities
5 Risks and Challenges of AI Awareness


TF1-EN-3M: Three Million Synthetic Moral Fables for Training Small, Open Language Models / 2504.20605 / ISBN:https://doi.org/10.48550/arXiv.2504.20605 / Published by ArXiv / on (web) Publishing site
1 Introduction
3 LLM Evaluation and Comparison with Related Work
5 Discussion and threats to validity


Federated learning, ethics, and the double black box problem in medical AI / 2504.20656 / ISBN:https://doi.org/10.48550/arXiv.2504.20656 / Published by ArXiv / on (web) Publishing site
2 What is federated learning?
3 Federated learning in healthcare
4 Limitations
5 The double black box problem
6 Further ethical concerns
7 Conclusion


Generative AI in Financial Institution: A Global Survey of Opportunities, Threats, and Regulation / 2504.21574 / ISBN:https://doi.org/10.48550/arXiv.2504.21574 / Published by ArXiv / on (web) Publishing site
3. Emerging Cybersecurity Threats to Financial Institution
7. Recommendations for Secure AI Adoption


From Texts to Shields: Convergence of Large Language Models and Cybersecurity / 2505.00841 / ISBN:https://doi.org/10.48550/arXiv.2505.00841 / Published by ArXiv / on (web) Publishing site
3 LLM Agent and Applications
4 Socio-Technical Aspects of LLM and Security


LLM Ethics Benchmark: A Three-Dimensional Assessment System for Evaluating Moral Reasoning in Large Language Models / 2505.00853 / ISBN:https://doi.org/10.48550/arXiv.2505.00853 / Published by ArXiv / on (web) Publishing site
1 Introduction
4 Proposed Methodology for Testing LLM Moral Reasoning
6 Concluding Remarks and Future Directions


Securing the Future of IVR: AI-Driven Innovation with Agile Security, Data Regulation, and Ethical AI Integration / 2505.01514 / ISBN:https://doi.org/10.48550/arXiv.2505.01514 / Published by ArXiv / on (web) Publishing site
III. Widget-Based Platforms: Improved Access, New Risks
IV. The Role of AI in Modern IVR Systems


Emotions in the Loop: A Survey of Affective Computing for Emotional Support / 2505.01542 / ISBN:https://doi.org/10.48550/arXiv.2505.01542 / Published by ArXiv / on (web) Publishing site
IV. Applications and Approaches
V. Technological Strengths
VI. Datasets for Emotion Management and Sentiment Analysis
VIII. Challenges and Weaknesses
IX. Ethical and Societal Considerations


Towards Safer Pretraining: Analyzing and Filtering Harmful Content in Webscale datasets for Responsible LLMs / 2505.02009 / ISBN:https://doi.org/10.48550/arXiv.2505.02009 / Published by ArXiv / on (web) Publishing site
4 Topical and Toxic Prompt (TTP)
5 HarmFormer
6 Results
7 Conclusion
8 Limitations & Future Work


GenAI in Entrepreneurship: a systematic review of generative artificial intelligence in entrepreneurship research: current issues and future directions / 2505.05523 / ISBN:https://doi.org/10.48550/arXiv.2505.05523 / Published by ArXiv / on (web) Publishing site
2. Background
3. Methodology
4. Findings
5. Ethics, Opportunities and Future Directions


Examining the Expanding Role of Synthetic Data Throughout the AI Development Pipeline / 2501.18493 / ISBN:https://doi.org/10.48550/arXiv.2501.18493 / Published by ArXiv / on (web) Publishing site
Appendices


AI and Generative AI Transforming Disaster Management: A Survey of Damage Assessment and Response Techniques / 2505.08202 / ISBN:https://doi.org/10.48550/arXiv.2505.08202 / Published by ArXiv / on (web) Publishing site
II Domain-Specific Literature Review


Ethics and Persuasion in Reinforcement Learning from Human Feedback: A Procedural Rhetorical Approach / 2505.09576 / ISBN:https://doi.org/10.48550/arXiv.2505.09576 / Published by ArXiv / on (web) Publishing site
Abstract
I Introduction
II Background
IV Persuasive Procedures in Generative AI


WorldView-Bench: A Benchmark for Evaluating Global Cultural Perspectives in Large Language Models / 2505.09595 / ISBN:https://doi.org/10.48550/arXiv.2505.09595 / Published by ArXiv / on (web) Publishing site
Abstract
1 Introduction
3 Methodology and System Design
6 Discussion and Potential Limitations


Analysing Safety Risks in LLMs Fine-Tuned with Pseudo-Malicious Cyber Security Data / 2505.09974 / ISBN:https://doi.org/10.48550/arXiv.2505.09974 / Published by ArXiv / on (web) Publishing site
3 Methodology


AI LEGO: Scaffolding Cross-Functional Collaboration in Industrial Responsible AI Practices during Early Design Stages / 2505.10300 / ISBN:https://doi.org/10.48550/arXiv.2505.10300 / Published by ArXiv / on (web) Publishing site
4 AI LEGO
Appendices


Let's have a chat with the EU AI Act / 2505.11946 / ISBN:https://doi.org/10.48550/arXiv.2505.11946 / Published by ArXiv / on (web) Publishing site
III System Design and Architecture


Sentience Quest: Towards Embodied, Emotionally Adaptive, Self-Evolving, Ethically Aligned Artificial General Intelligence / 2505.12229 / ISBN:https://doi.org/10.48550/arXiv.2505.12229 / Published by ArXiv / on (web) Publishing site
1 Introduction
4 Preliminary Results and Evaluation


Aligning Trustworthy AI with Democracy: A Dual Taxonomy of Opportunities and Risks / 2505.13565 / ISBN:https://doi.org/10.48550/arXiv.2505.13565 / Published by ArXiv / on (web) Publishing site
3 Positive taxonomy: Positive impact of AI on democracy
6 Reflections, Contextualization, and Synthesis


Kaleidoscope Gallery: Exploring Ethics and Generative AI Through Art / 2505.14758 / ISBN:https://doi.org/10.48550/arXiv.2505.14758 / Published by ArXiv / on (web) Publishing site
2 Background
4 Results


AI vs. Human Judgment of Content Moderation: LLM-as-a-Judge and Ethics-Based Response Refusals / 2505.15365 / ISBN:https://doi.org/10.48550/arXiv.2505.15365 / Published by ArXiv / on (web) Publishing site
2 Theoretical Background
3 Methodology
4 Results
5 Discussion


Exploring Moral Exercises for Human Oversight of AI systems: Insights from Three Pilot Studies / 2505.15851 / ISBN:https://doi.org/10.48550/arXiv.2505.15851 / Published by ArXiv / on (web) Publishing site
3 Pilot Studies


Advancing the Scientific Method with Large Language Models: From Hypothesis to Discovery / 2505.16477 / ISBN:https://doi.org/10.48550/arXiv.2505.16477 / Published by ArXiv / on (web) Publishing site
Current use of LLMs – From Specialised Scientific Copilots to LLM-assisted Scientific Discoveries
Toward Large Language Models as Creative Engines for Fundamental Science
Challenges and Opportunities
Conclusions


Cultural Value Alignment in Large Language Models: A Prompt-based Analysis of Schwartz Values in Gemini, ChatGPT, and DeepSeek / 2505.17112 / ISBN:https://doi.org/10.48550/arXiv.2505.17112 / Published by ArXiv / on (web) Publishing site
Abstract
Introduction
Methods
Discussion


SweEval: Do LLMs Really Swear? A Safety Benchmark for Testing Limits for Enterprise Use / 2505.17332 / ISBN:https://doi.org/10.48550/arXiv.2505.17332 / Published by ArXiv / on (web) Publishing site
2 Related Work
4 Experiments


TEDI: Trustworthy and Ethical Dataset Indicators to Analyze and Compare Dataset Documentation / 2505.17841 / ISBN:https://doi.org/10.48550/arXiv.2505.17841 / Published by ArXiv / on (web) Publishing site
4 Impact of Data Sourcing on Trustworthy and Ethical Indicators
Appendix


Just as Humans Need Vaccines, So Do Models: Model Immunization to Combat Falsehoods / 2505.17870 / ISBN:https://doi.org/10.48550/arXiv.2505.17870 / Published by ArXiv / on (web) Publishing site
1 Introduction
2 Conceptual Framework: Model Immunization via Quarantined Falsehoods
3 Discussion and Limitations
Appendix


AI Literacy for Legal AI Systems: A practical approach / 2505.18006 / ISBN:https://doi.org/10.48550/arXiv.2505.18006 / Published by ArXiv / on (web) Publishing site
5. Legal AI Systems Risk Assessment


Opacity as a Feature, Not a Flaw: The LoBOX Governance Ethic for Role-Sensitive Explainability and Institutional Trust in AI / 2505.20304 / ISBN:https://doi.org/10.48550/arXiv.2505.20304 / Published by ArXiv / on (web) Publishing site
3 Operationalizing Ethical Governance: The Three-Stage LoBOX Framework Pathway for Managing Opacity
4 Application scenarios


Making Sense of the Unsensible: Reflection, Survey, and Challenges for XAI in Large Language Models Toward Human-Centered AI / 2505.20305 / ISBN:https://doi.org/10.48550/arXiv.2505.20305 / Published by ArXiv / on (web) Publishing site
2 Beyond Transparency: Why Explainability Is Essential for LLMs?
7 Designing Actionable and Governable XAI: Challenges and Research Frontiers


Can we Debias Social Stereotypes in AI-Generated Images? Examining Text-to-Image Outputs and User Perceptions / 2505.20692 / ISBN:https://doi.org/10.48550/arXiv.2505.20692 / Published by ArXiv / on (web) Publishing site
1 Introduction
5 Results


Are Language Models Consequentialist or Deontological Moral Reasoners? / 2505.21479 / ISBN:https://doi.org/10.48550/arXiv.2505.21479 / Published by ArXiv / on (web) Publishing site
2 Problem Setup
6 Conclusion


Human-Centered Human-AI Collaboration (HCHAC) / 2505.22477 / ISBN:https://doi.org/10.48550/arXiv.2505.22477 / Published by ArXiv / on (web) Publishing site
2. An Overview of Human-AI Collaboration
5. Human-Centered Human-AI Collaboration


Exposing the Impact of GenAI for Cybercrime: An Investigation into the Dark Side / 2505.23733 / ISBN:https://doi.org/10.48550/arXiv.2505.23733 / Published by ArXiv / on (web) Publishing site
1 Introduction
3 Hypothesis Development
6 Discussion


Exploring Societal Concerns and Perceptions of AI: A Thematic Analysis through the Lens of Problem-Seeking / 2505.23930 / ISBN:https://doi.org/10.48550/arXiv.2505.23930 / Published by ArXiv / on (web) Publishing site
Results
Discussion


Locating Risk: Task Designers and the Challenge of Risk Disclosure in RAI Content Work / 2505.24246 / ISBN:https://doi.org/10.48550/arXiv.2505.24246 / Published by ArXiv / on (web) Publishing site
5 Discussion


Bottom-Up Perspectives on AI Governance: Insights from User Reviews of AI Products / 2506.00080 / ISBN:https://doi.org/10.48550/arXiv.2506.00080 / Published by ArXiv / on (web) Publishing site
Appendix


Feeling Guilty Being a c(ai)borg: Navigating the Tensions Between Guilt and Empowerment in AI Use / 2506.00094 / ISBN:https://doi.org/10.48550/arXiv.2506.00094 / Published by ArXiv / on (web) Publishing site
6. Skills and Implications for the Future


Ethical AI: Towards Defining a Collective Evaluation Framework / 2506.00233 / ISBN:https://doi.org/10.48550/arXiv.2506.00233 / Published by ArXiv / on (web) Publishing site
III Literature Review


Wide Reflective Equilibrium in LLM Alignment: Bridging Moral Epistemology and AI Safety / 2506.00415 / ISBN:https://doi.org/10.48550/arXiv.2506.00415 / Published by ArXiv / on (web) Publishing site
4. The Landscape of LLM Alignment: Methods and Challenges
5. Wide Reflective Equilibrium as the Descriptive Key to LLM Alignment
6. Normativity and the Limits of the Analogy
7. Operationalizing MWRE for LLM Alignment: Pathways, Pitfalls, and Technical Mechanisms
8. Future Research Directions and Broader Implications


DeepSeek in Healthcare: A Survey of Capabilities, Risks, and Clinical Applications of Open-Source Large Language Models / 2506.01257 / ISBN:https://doi.org/10.48550/arXiv.2506.01257 / Published by ArXiv / on (web) Publishing site
Abstract
Introduction
Training and Architecture
Clinical Applications
Strengths and Limitations of DeepSeek-R1


Machine vs Machine: Using AI to Tackle Generative AI Threats in Assessment / 2506.02046 / ISBN:https://doi.org/10.48550/arXiv.2506.02046 / Published by ArXiv / on (web) Publishing site
3. The Eight Elements of Static Analysis: Theoretical Justification


Mitigating Manipulation and Enhancing Persuasion: A Reflective Multi-Agent Approach for Legal Argument Generation / 2506.02992 / ISBN:https://doi.org/10.48550/arXiv.2506.02992 / Published by ArXiv / on (web) Publishing site
1 Introduction


HADA: Human-AI Agent Decision Alignment Architecture / 2506.04253 / ISBN:https://doi.org/10.48550/arXiv.2506.04253 / Published by ArXiv / on (web) Publishing site
3 Design & Development


Explainability in Context: A Multilevel Framework Aligning AI Explanations with Stakeholder Needs / 2506.05887 / ISBN:https://doi.org/10.48550/arXiv.2506.05887 / Published by ArXiv / on (web) Publishing site
3 A Multilevel Framework for Audience-Aware Explainability
4 Case Studies: Applying the Multilevel Framework with LLMs


Whole-Person Education for AI Engineers / 2506.09185 / ISBN:https://doi.org/10.48550/arXiv.2506.09185 / Published by ArXiv / on (web) Publishing site
II Literature Review
VII Appendix: Extracts from Participant Reflections


Where's the Line? A Classroom Activity on Ethical and Constructive Use of Generative AI in Physics / 2506.00229 / ISBN:https://doi.org/10.48550/arXiv.2506.00229 / Published by ArXiv / on (web) Publishing site
Conclusion: Co-Creating Integrity in the Age of AI