RobertoLofaro.com - Knowledge Portal - human-generated content
Change, with and without technology



You are now here: AI Ethics Primer - search within the bibliography - version 0.4 of 2023-12-13 > (tag cloud) > tag selected: metric


Currently searching for:

If you need more than one keyword, separate the keywords with an underscore (_). The search key can be up to 50 characters long.

If you modify the keywords, press Enter within the field to confirm the new search key.
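As an illustration only (this is not the portal's actual code), the following minimal Python sketch shows how an underscore-separated search key with the 50-character limit described above could be parsed; the function name parse_search_key and the error handling are assumptions made for the example.

# Minimal sketch (not the portal's actual code): parse an underscore-separated
# search key, enforcing the 50-character limit described above.
MAX_KEY_LENGTH = 50

def parse_search_key(raw_key: str) -> list[str]:
    """Split a search key such as 'metric_fairness' into individual keywords."""
    key = raw_key.strip()
    if len(key) > MAX_KEY_LENGTH:
        raise ValueError(f"search key longer than {MAX_KEY_LENGTH} characters")
    # Ignore empty fragments caused by leading, trailing, or doubled underscores.
    return [keyword for keyword in key.split("_") if keyword]

print(parse_search_key("metric_fairness"))  # ['metric', 'fairness']

Under this convention, the single tag used on this page corresponds to the search key "metric".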

Tag: metric

Bibliography items where the tag occurs: 433
The AI Index 2022 Annual Report / 2205.03468 / ISBN:https://doi.org/10.48550/arXiv.2205.03468 / Published by ArXiv / on (web) Publishing site
Report highlights
Chapter 2 Technical Performance
Chapter 3 Technical AI Ethics
Appendix


The Different Faces of AI Ethics Across the World: A Principle-Implementation Gap Analysis / 2206.03225 / ISBN:https://doi.org/10.48550/arXiv.2206.03225 / Published by ArXiv / on (web) Publishing site
5 Evaluation of Ethical Principle Implementations
6 Gap Mitigation
8 Conclusion


On the Current and Emerging Challenges of Developing Fair and Ethical AI Solutions in Financial Services / 2111.01306 / ISBN:https://doi.org/10.48550/arXiv.2111.01306 / Published by ArXiv / on (web) Publishing site
3 Practical Challenges of Ethical AI


What does it mean to be a responsible AI practitioner: An ontology of roles and skills / 2205.03946 / ISBN:https://doi.org/10.48550/arXiv.2205.03946 / Published by ArXiv / on (web) Publishing site
Appendix A supplementary material


Implementing Responsible AI: Tensions and Trade-Offs Between Ethics Aspects / 2304.08275 / ISBN:https://doi.org/10.48550/arXiv.2304.08275 / Published by ArXiv / on (web) Publishing site
II. Underlying Aspects


QB4AIRA: A Question Bank for AI Risk Assessment / 2305.09300 / ISBN:https://doi.org/10.48550/arXiv.2305.09300 / Published by ArXiv / on (web) Publishing site
3 Evaluation
4 Conclusion


The Ethics of AI Value Chains / 2307.16787 / ISBN:https://doi.org/10.48550/arXiv.2307.16787 / Published by ArXiv / on (web) Publishing site
4. Ethical Implications of AI Value Chains


From Military to Healthcare: Adopting and Expanding Ethical Principles for Generative Artificial Intelligence / 2308.02448 / ISBN:https://doi.org/10.48550/arXiv.2308.02448 / Published by ArXiv / on (web) Publishing site
What is Generative Artificial Intelligence?
GREAT PLEA Ethical Principles for Generative AI in Healthcare


Dual Governance: The intersection of centralized regulation and crowdsourced safety mechanisms for Generative AI / 2308.04448 / ISBN:https://doi.org/10.48550/arXiv.2308.04448 / Published by ArXiv / on (web) Publishing site
2 Background


Normative Ethics Principles for Responsible AI Systems: Taxonomy and Future Directions / 2208.12616 / ISBN:https://doi.org/10.48550/arXiv.2208.12616 / Published by ArXiv / on (web) Publishing site
3 Taxonomy of ethical principles


A Survey of Safety and Trustworthiness of Large Language Models through the Lens of Verification and Validation / 2305.11391 / ISBN:https://doi.org/10.48550/arXiv.2305.11391 / Published by ArXiv / on (web) Publishing site
5 Falsification and Evaluation
6 Verification
7 Runtime Monitor


Targeted Data Augmentation for bias mitigation / 2308.11386 / ISBN:https://doi.org/10.48550/arXiv.2308.11386 / Published by ArXiv / on (web) Publishing site
1 Introduction
3 Targeted data augmentation
4 Experiments
5 Conclusions


Exploring the Power of Creative AI Tools and Game-Based Methodologies for Interactive Web-Based Programming / 2308.11649 / ISBN:https://doi.org/10.48550/arXiv.2308.11649 / Published by ArXiv / on (web) Publishing site
3 Emergence of Creative AI Tools and Game-Based Methodologies


Collect, Measure, Repeat: Reliability Factors for Responsible AI Data Collection / 2308.12885 / ISBN:https://doi.org/10.48550/arXiv.2308.12885 / Published by ArXiv / on (web) Publishing site
Abstract
1 Introduction
2 Related Work on Data Excellence
3 Reliability and Reproducibility Metrics for Responsible Data Collection
5 Results
6 Discussion
7 Conclusions
A Agreement Analysis
D Stability analysis
E Replicability similarity analysis


Building Trust in Conversational AI: A Comprehensive Review and Solution Architecture for Explainable, Privacy-Aware Systems using LLMs and Knowledge Graph / 2308.13534 / ISBN:https://doi.org/10.48550/arXiv.2308.13534 / Published by ArXiv / on (web) Publishing site
VI. Solution architecture for privacy-aware and trustworthy conversational AI


The Promise and Peril of Artificial Intelligence -- Violet Teaming Offers a Balanced Path Forward / 2308.14253 / ISBN:https://doi.org/10.48550/arXiv.2308.14253 / Published by ArXiv / on (web) Publishing site
7 Violet teaming to address dual-use risks of AI in biotechnology
10 Supplemental & additional details


Rethinking Machine Ethics -- Can LLMs Perform Moral Reasoning through the Lens of Moral Theories? / 2308.15399 / ISBN:https://doi.org/10.48550/arXiv.2308.15399 / Published by ArXiv / on (web) Publishing site
4 Experiment


The AI Revolution: Opportunities and Challenges for the Finance Sector / 2308.16538 / ISBN:https://doi.org/10.48550/arXiv.2308.16538 / Published by ArXiv / on (web) Publishing site
3 Benefits of AI use in the finance sector


Ethical Framework for Harnessing the Power of AI in Healthcare and Beyond / 2309.00064 / ISBN:https://doi.org/10.48550/arXiv.2309.00064 / Published by ArXiv / on (web) Publishing site
3 Bias and fairness


Pathway to Future Symbiotic Creativity / 2209.02388 / ISBN:https://doi.org/10.48550/arXiv.2209.02388 / Published by ArXiv / on (web) Publishing site
Contents
Part 2 Art Data and Human–Machine Interaction in Art Creation
Part 2 - 1 Biometric Signal Sensing Technologies and Emotion Data
Part 2 - 2 Motion Capture Technologies and Motion Data
Part 2 - 3 Photogrammetry / Volumetric Capture
Part 2 - 5 Immersive Visualisation: Machine to Human Manifestations
Part 3 - 2 Machine Artist Models
Part 3 - 4 Demonstration of the Proposed Framework
Part 5 Ethical AI and Machine Artist


FUTURE-AI: Guiding Principles and Consensus Recommendations for Trustworthy Artificial Intelligence in Medical Imaging / 2109.09658 / ISBN:https://doi.org/10.48550/arXiv.2109.09658 / Published by ArXiv / on (web) Publishing site
2. Fairness - For Equitable AI in Medical Imaging
3. Universality - For Standardised AI in Medical Imaging
4. Traceability - For Transparent and Dynamic AI in Medical Imaging
5. Usability - For Effective and Beneficial AI in Medical Imaging
7. Explainability - For Enhanced Understanding of AI in Medical Imaging
9. Discussion and Conclusion


The Cambridge Law Corpus: A Corpus for Legal AI Research / 2309.12269 / ISBN:https://doi.org/10.48550/arXiv.2309.12269 / Published by ArXiv / on (web) Publishing site
4 Experiments
Cambridge Law Corpus: Datasheet


EALM: Introducing Multidimensional Ethical Alignment in Conversational Information Retrieval / 2310.00970 / ISBN:https://doi.org/10.48550/arXiv.2310.00970 / Published by ArXiv / on (web) Publishing site
1 Introduction
2 Related Work
3 Dataset Construction
5 Experiments
Appendix


Security Considerations in AI-Robotics: A Survey of Current Methods, Challenges, and Opportunities / 2310.08565 / ISBN:https://doi.org/10.48550/arXiv.2310.08565 / Published by ArXiv / on (web) Publishing site
II. AI-Robotics Systems Architecture


If our aim is to build morality into an artificial agent, how might we begin to go about doing so? / 2310.08295 / ISBN:https://doi.org/10.48550/arXiv.2310.08295 / Published by ArXiv / on (web) Publishing site
4 AI Governance Principles


Deepfakes, Phrenology, Surveillance, and More! A Taxonomy of AI Privacy Risks / 2310.07879 / ISBN:https://doi.org/10.48550/arXiv.2310.07879 / Published by ArXiv / on (web) Publishing site
4 Taxonomy of AI Privacy Risks


ClausewitzGPT Framework: A New Frontier in Theoretical Large Language Model Enhanced Information Operations / 2310.07099 / ISBN:https://doi.org/10.48550/arXiv.2310.07099 / Published by ArXiv / on (web) Publishing site
Theoretical Impact of LLMs on Information Operations
Mathematical Foundations
Looking Forward: ClausewitzGPT


A Review of the Ethics of Artificial Intelligence and its Applications in the United States / 2310.05751 / ISBN:https://doi.org/10.48550/arXiv.2310.05751 / Published by ArXiv / on (web) Publishing site
4. Implementing the Practical Use of Ethical AI Applications


A Survey of Large Language Models for Healthcare: from Data, Technology, and Applications to Accountability and Ethics / 2310.05694 / ISBN:https://doi.org/10.48550/arXiv.2310.05694 / Published by ArXiv / on (web) Publishing site
4. Usage and data for healthcare LLM
5. Improving fairness, accountability, transparency, and ethics


STREAM: Social data and knowledge collective intelligence platform for TRaining Ethical AI Models / 2310.05563 / ISBN:https://doi.org/10.48550/arXiv.2310.05563 / Published by ArXiv / on (web) Publishing site
2 STREAM: Social data and knowledge collective intelligence platform for TRaining Ethical AI Models
3 The applications of STREAM


Regulation and NLP (RegNLP): Taming Large Language Models / 2310.05553 / ISBN:https://doi.org/10.48550/arXiv.2310.05553 / Published by ArXiv / on (web) Publishing site
4 Scientific Expertise, Social Media and Regulatory Capture


Ethics of Artificial Intelligence and Robotics in the Architecture, Engineering, and Construction Industry / 2310.05414 / ISBN:https://doi.org/10.48550/arXiv.2310.05414 / Published by ArXiv / on (web) Publishing site
1. Introduction
4. Systematic Review and Scientometric Analysis
5. Ethical Issues of AI and Robotics in AEC Industry


Commercialized Generative AI: A Critical Study of the Feasibility and Ethics of Generating Native Advertising Using Large Language Models in Conversational Web Search / 2310.04892 / ISBN:https://doi.org/10.48550/arXiv.2310.04892 / Published by ArXiv / on (web) Publishing site
Background and Related Work


Risk of AI in Healthcare: A Comprehensive Literature Review and Study Framework / 2309.14530 / ISBN:https://doi.org/10.48550/arXiv.2309.14530 / Published by ArXiv / on (web) Publishing site
4. Technical Risks


The Return on Investment in AI Ethics: A Holistic Framework / 2309.13057 / ISBN:https://doi.org/10.48550/arXiv.2309.13057 / Published by ArXiv / on (web) Publishing site
2. AI Ethics
4. A Holistic Framework


An Evaluation of GPT-4 on the ETHICS Dataset / 2309.10492 / ISBN:https://doi.org/10.48550/arXiv.2309.10492 / Published by ArXiv / on (web) Publishing site
2 Datasets and Methods


A Conceptual Algorithm for Applying Ethical Principles of AI to Medical Practice / 2304.11530 / ISBN:https://doi.org/10.48550/arXiv.2304.11530 / Published by ArXiv / on (web) Publishing site
3 Ethical datasets and algorithm development guidelines
4 Towards solving key ethical challenges in Medical AI
5 Ethical guidelines for medical AI model deployment


Responsible AI Pattern Catalogue: A Collection of Best Practices for AI Governance and Engineering / 2209.04963 / ISBN:https://doi.org/10.48550/arXiv.2209.04963 / Published by ArXiv / on (web) Publishing site
5 Product Patterns


FUTURE-AI: International consensus guideline for trustworthy and deployable artificial intelligence in healthcare / 2309.12325 / ISBN:https://doi.org/10.48550/arXiv.2309.12325 / Published by ArXiv / on (web) Publishing site
FUTURE-AI GUIDELINE


Specific versus General Principles for Constitutional AI / 2310.13798 / ISBN:https://doi.org/10.48550/arXiv.2310.13798 / Published by ArXiv / on (web) Publishing site
4 Reinforcement Learning with Good-for-Humanity Preference Models


The Self 2.0: How AI-Enhanced Self-Clones Transform Self-Perception and Improve Presentation Skills / 2310.15112 / ISBN:https://doi.org/10.48550/arXiv.2310.15112 / Published by ArXiv / on (web) Publishing site
2 Related Work
4 Findings
5 Discussion


Systematic AI Approach for AGI: Addressing Alignment, Energy, and AGI Grand Challenges / 2310.15274 / ISBN:https://doi.org/10.48550/arXiv.2310.15274 / Published by ArXiv / on (web) Publishing site
6 System Insights from the Brain


Unpacking the Ethical Value Alignment in Big Models / 2310.17551 / ISBN:https://doi.org/10.48550/arXiv.2310.17551 / Published by ArXiv / on (web) Publishing site
3 Investigating the Ethical Values of Large Language Models
4 Equilibrium Alignment: A Prospective Paradigm for Ethical Value Alignment


AI for Open Science: A Multi-Agent Perspective for Ethically Translating Data to Knowledge / 2310.18852 / ISBN:https://doi.org/10.48550/arXiv.2310.18852 / Published by ArXiv / on (web) Publishing site
Abstract
1 Introduction
4 Optimizing an Openness Metric in AI for Science
6 Conclusion and Future Work


Human participants in AI research: Ethics and transparency in practice / 2311.01254 / ISBN:https://doi.org/10.48550/arXiv.2311.01254 / Published by ArXiv / on (web) Publishing site
Appendix A Evaluating Current Practices for Human-Participants Research


Towards Effective Paraphrasing for Information Disguise / 2311.05018 / ISBN:https://doi.org/10.1007/978-3-031-28238-6_22 / Published by ArXiv / on (web) Publishing site
1 Introduction
2 Related Work
4 Evaluation


Kantian Deontology Meets AI Alignment: Towards Morally Grounded Fairness Metrics / 2311.05227 / ISBN:https://doi.org/10.48550/arXiv.2311.05227 / Published by ArXiv / on (web) Publishing site
Abstract
1 Introduction
2 Overview of Kantian Deontology
3 Measuring Fairness Metrics
4 Deontological AI Alignment
5 Aligning with Deontological Principles: Use Cases
6 Conclusion


Unlocking the Potential of ChatGPT: A Comprehensive Exploration of its Applications, Advantages, Limitations, and Future Directions in Natural Language Processing / 2304.02017 / ISBN:https://doi.org/10.48550/arXiv.2304.02017 / Published by ArXiv / on (web) Publishing site
4 Applications of ChatGPT in real-world scenarios


Fairness And Bias in Artificial Intelligence: A Brief Survey of Sources, Impacts, And Mitigation Strategies / 2304.07683 / ISBN:https://doi.org/10.48550/arXiv.2304.07683 / Published by ArXiv / on (web) Publishing site
IV. Mitigation strategies for bias in AI
VI. Mitigation strategies for fairness in AI


Synergizing Human-AI Agency: A Guide of 23 Heuristics for Service Co-Creation with LLM-Based Agents / 2310.15065 / ISBN:https://doi.org/10.48550/arXiv.2310.15065 / Published by ArXiv / on (web) Publishing site
3 Method
5 Discussion


She had Cobalt Blue Eyes: Prompt Testing to Create Aligned and Sustainable Language Models / 2310.18333 / ISBN:https://doi.org/10.48550/arXiv.2310.18333 / Published by ArXiv / on (web) Publishing site
1 Introduction


How Trustworthy are Open-Source LLMs? An Assessment under Malicious Demonstrations Shows their Vulnerabilities / 2311.09447 / ISBN:https://doi.org/10.48550/arXiv.2311.09447 / Published by ArXiv / on (web) Publishing site
4 Experiments


Revolutionizing Customer Interactions: Insights and Challenges in Deploying ChatGPT and Generative Chatbots for FAQs / 2311.09976 / ISBN:https://doi.org/10.48550/arXiv.2311.09976 / Published by ArXiv / on (web) Publishing site
1. Introduction
2. Chatbots Background and Scope of Research
3. Chatbot approaches overview: Taxonomy of existing methods
6. Open challenges
8. Conclusion


Practical Cybersecurity Ethics: Mapping CyBOK to Ethical Concerns / 2311.10165 / ISBN:https://doi.org/10.48550/arXiv.2311.10165 / Published by ArXiv / on (web) Publishing site
4 Findings


Case Repositories: Towards Case-Based Reasoning for AI Alignment / 2311.10934 / ISBN:https://doi.org/10.48550/arXiv.2311.10934 / Published by ArXiv / on (web) Publishing site
2 Proposed Process


Responsible AI Considerations in Text Summarization Research: A Review of Current Practices / 2311.11103 / ISBN:https://doi.org/10.48550/arXiv.2311.11103 / Published by ArXiv / on (web) Publishing site
2 Background & Related Work
3 Methods
4 Findings
B Methodology


GPT in Data Science: A Practical Exploration of Model Selection / 2311.11516 / ISBN:https://doi.org/10.48550/arXiv.2311.11516 / Published by ArXiv / on (web) Publishing site
Abstract
III. Approach: capturing and representing heuristics behind GPT's decision-making process
IV. Comparative results
V. Conclusion and future work
VI. Future work


Large Language Models in Education: Vision and Opportunities / 2311.13160 / ISBN:https://doi.org/10.48550/arXiv.2311.13160 / Published by ArXiv / on (web) Publishing site
VI. Challenges and future directions


Towards Auditing Large Language Models: Improving Text-based Stereotype Detection / 2311.14126 / ISBN:https://doi.org/10.48550/arXiv.2311.14126 / Published by ArXiv / on (web) Publishing site
4 Results and Discussion


RAISE -- Radiology AI Safety, an End-to-end lifecycle approach / 2311.14570 / ISBN:https://doi.org/10.48550/arXiv.2311.14570 / Published by ArXiv / on (web) Publishing site
2. Pre-Deployment phase
3. Production deployment monitoring phase
4. Post-market surveillance phase


Ethics and Responsible AI Deployment / 2311.14705 / ISBN:https://doi.org/10.48550/arXiv.2311.14705 / Published by ArXiv / on (web) Publishing site
4. Addressing bias, transparency, and accountability


From deepfake to deep useful: risks and opportunities through a systematic literature review / 2311.15809 / ISBN:https://doi.org/10.48550/arXiv.2311.15809 / Published by ArXiv / on (web) Publishing site
2. Material and methods


Survey on AI Ethics: A Socio-technical Perspective / 2311.17228 / ISBN:https://doi.org/10.48550/arXiv.2311.17228 / Published by ArXiv / on (web) Publishing site
3 Transparency and explainability
4 Fairness and equity


Deepfakes, Misinformation, and Disinformation in the Era of Frontier AI, Generative AI, and Large AI Models / 2311.17394 / ISBN:https://doi.org/10.48550/arXiv.2311.17394 / Published by ArXiv / on (web) Publishing site
V. Technical defense mechanisms


From Lab to Field: Real-World Evaluation of an AI-Driven Smart Video Solution to Enhance Community Safety / 2312.02078 / ISBN:https://doi.org/10.48550/arXiv.2312.02078 / Published by ArXiv / on (web) Publishing site
Applications and Visualizations
System Evaluation and Results


Intelligence Primer / 2008.07324 / ISBN:https://doi.org/10.48550/arXiv.2008.07324 / Published by ArXiv / on (web) Publishing site
2 Human intelligence
6 Measuring intelligence
8 Consciousness
11 Control of intelligence
15 Final thoughts


RE-centric Recommendations for the Development of Trustworthy(er) Autonomous Systems / 2306.01774 / ISBN:https://doi.org/10.48550/arXiv.2306.01774 / Published by ArXiv / on (web) Publishing site
2 Related work
4 Results & analysis
5 Discussion


Control Risk for Potential Misuse of Artificial Intelligence in Science / 2312.06632 / ISBN:https://doi.org/10.48550/arXiv.2312.06632 / Published by ArXiv / on (web) Publishing site
2 Risks of Misuse for Artificial Intelligence in Science
5 Discussion
Appendix C Detailed Implementation of SciGuard


Disentangling Perceptions of Offensiveness: Cultural and Moral Correlates / 2312.06861 / ISBN:https://doi.org/10.48550/arXiv.2312.06861 / Published by ArXiv / on (web) Publishing site
Study 1: Geo-cultural Differences in Offensiveness


Culturally Responsive Artificial Intelligence -- Problems, Challenges and Solutions / 2312.08467 / ISBN:https://doi.org/10.48550/arXiv.2312.08467 / Published by ArXiv / on (web) Publishing site
Culturally responsive AI – current landscape


Designing Guiding Principles for NLP for Healthcare: A Case Study of Maternal Health / 2312.11803 / ISBN:https://doi.org/10.48550/arXiv.2312.11803 / Published by ArXiv / on (web) Publishing site
B Extended Guiding Principles


Beyond Fairness: Alternative Moral Dimensions for Assessing Algorithms and Designing Systems / 2312.12559 / ISBN:https://doi.org/10.48550/arXiv.2312.12559 / Published by ArXiv / on (web) Publishing site
2 The Reign of Algorithmic Fairness


Learning Human-like Representations to Enable Learning Human Values / 2312.14106 / ISBN:https://doi.org/10.48550/arXiv.2312.14106 / Published by ArXiv / on (web) Publishing site
3 Problem Formulation
4 Learning Human Morality Judgments


Improving Task Instructions for Data Annotators: How Clear Rules and Higher Pay Increase Performance in Data Annotation in the AI Economy / 2312.14565 / ISBN:https://doi.org/10.48550/arXiv.2312.14565 / Published by ArXiv / on (web) Publishing site
II. Theoretical background and hypotheses


Autonomous Threat Hunting: A Future Paradigm for AI-Driven Threat Intelligence / 2401.00286 / ISBN:https://doi.org/10.48550/arXiv.2401.00286 / Published by ArXiv / on (web) Publishing site
1. Introduction
7. Evaluation metrics and performance benchmarks
9. Conclusion


Exploring the Frontiers of LLMs in Psychological Applications: A Comprehensive Review / 2401.01519 / ISBN:https://doi.org/10.48550/arXiv.2401.01519 / Published by ArXiv / on (web) Publishing site
3. LLMs in clinical and counseling psychology
5. LLMs in social and cultural psychology


AI Ethics Principles in Practice: Perspectives of Designers and Developers / 2112.07467 / ISBN:https://doi.org/10.48550/arXiv.2112.07467 / Published by ArXiv / on (web) Publishing site
V. Discussion and suggestions
VI. Support mechanisms


Unmasking Bias in AI: A Systematic Review of Bias Detection and Mitigation Strategies in Electronic Health Record-based Models / 2310.19917 / ISBN:https://doi.org/10.48550/arXiv.2310.19917 / Published by ArXiv / on (web) Publishing site
Abstract
Materials and methods
Results
Discussion


Resolving Ethics Trade-offs in Implementing Responsible AI / 2401.08103 / ISBN:https://doi.org/10.48550/arXiv.2401.08103 / Published by ArXiv / on (web) Publishing site
II. Approaches for Resolving Trade-offs


Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making / 2401.08691 / ISBN:https://doi.org/10.48550/arXiv.2401.08691 / Published by ArXiv / on (web) Publishing site
Abstract
Contents / List of figures / List of tables / Acronyms
1 Introduction
I Understanding bias - 2 Bias and moral framework in AI-based decision making
3 Bias on demand: a framework for generating synthetic data with bias
4 Fairness metrics landscape in machine learning
II Mitigating bias - 5 Fairness mitigation
6 FFTree: a flexible tree to mitigate multiple fairness criteria
III Accounting for bias - 7 Addressing fairness in the banking sector
8 Fairview: an evaluative AI support for addressing fairness
9 Towards fairness through time
IV Conclusions


FAIR Enough: How Can We Develop and Assess a FAIR-Compliant Dataset for Large Language Models' Training? / 2401.11033 / ISBN:https://doi.org/10.48550/arXiv.2401.11033 / Published by ArXiv / on (web) Publishing site
2 FAIR Data Principles: Theoretical Background and Significance
4 Framework for FAIR Data Principles Integration in LLM Development


Enabling Global Image Data Sharing in the Life Sciences / 2401.13023 / ISBN:https://doi.org/10.48550/arXiv.2401.13023 / Published by ArXiv / on (web) Publishing site
3. Use cases representing different image data types and their challenges and status for sharing
Towards Global Image Data Sharing: A to-do list for various stakeholders


Beyond principlism: Practical strategies for ethical AI use in research practices / 2401.15284 / ISBN:https://doi.org/10.48550/arXiv.2401.15284 / Published by ArXiv / on (web) Publishing site
Abstract
3 Five specific goals and action-guiding strategies for ethical AI use in research practices


A Scoping Study of Evaluation Practices for Responsible AI Tools: Steps Towards Effectiveness Evaluations / 2401.17486 / ISBN:https://doi.org/10.48550/arXiv.2401.17486 / Published by ArXiv / on (web) Publishing site
2 Related work
4 RAI tool evaluation practices
5 Towards evaluation of RAI tool effectiveness
A List of RAI tools, with their primary publication
B RAI tools listed by target stage of AI development
D Summary of themes and codes


Detecting Multimedia Generated by Large AI Models: A Survey / 2402.00045 / ISBN:https://doi.org/10.48550/arXiv.2402.00045 / Published by ArXiv / on (web) Publishing site
Abstract
1 Introduction
3 Detection
4 Tools and Evaluation Metrics
5 Discussion
6 Conclusion


Responsible developments and networking research: a reflection beyond a paper ethical statement / 2402.00442 / ISBN:https://doi.org/10.48550/arXiv.2402.00442 / Published by ArXiv / on (web) Publishing site
3 Beyond technical dimensions


Trust and ethical considerations in a multi-modal, explainable AI-driven chatbot tutoring system: The case of collaboratively solving Rubik's Cube / 2402.01760 / ISBN:https://doi.org/10.48550/arXiv.2402.01760 / Published by ArXiv / on (web) Publishing site
2. Literature review
4. Discussion


(A)I Am Not a Lawyer, But...: Engaging Legal Experts towards Responsible LLM Policies for Legal Advice / 2402.01864 / ISBN:https://doi.org/10.48550/arXiv.2402.01864 / Published by ArXiv / on (web) Publishing site
3 Methods: case-based expert deliberation


POLARIS: A framework to guide the development of Trustworthy AI systems / 2402.05340 / ISBN:https://doi.org/10.48550/arXiv.2402.05340 / Published by ArXiv / on (web) Publishing site
5 POLARIS framework application


Face Recognition: to Deploy or not to Deploy? A Framework for Assessing the Proportional Use of Face Recognition Systems in Real-World Scenarios / 2402.05731 / ISBN:https://doi.org/10.48550/arXiv.2402.05731 / Published by ArXiv / on (web) Publishing site
1. Introduction
2. Background
3. Intervention models from other fields
5. The framework in practice
6. Compliance with International Regulations


Ethics in AI through the Practitioner's View: A Grounded Theory Literature Review / 2206.09514 / ISBN:https://doi.org/10.48550/arXiv.2206.09514 / Published by ArXiv / on (web) Publishing site
2 Background


How do machines learn? Evaluating the AIcon2abs method / 2401.07386 / ISBN:https://doi.org/10.48550/arXiv.2401.07386 / Published by ArXiv / on (web) Publishing site
4. Results


I Think, Therefore I am: Benchmarking Awareness of Large Language Models Using AwareBench / 2401.17882 / ISBN:https://doi.org/10.48550/arXiv.2401.17882 / Published by ArXiv / on (web) Publishing site
5 Experiments


User Modeling and User Profiling: A Comprehensive Survey / 2402.09660 / ISBN:https://doi.org/10.48550/arXiv.2402.09660 / Published by ArXiv / on (web) Publishing site
1 Introduction
4 Current Taxonomy


Inadequacies of Large Language Model Benchmarks in the Era of Generative Artificial Intelligence / 2402.09880 / ISBN:https://doi.org/10.48550/arXiv.2402.09880 / Published by ArXiv / on (web) Publishing site
I. Introduction
II. Background and Related Work
III. Unified Evaluation Framework For LLM Benchmarks
IV. Technological Aspects
V. Processual Elements
VII. Discussions


Evolving AI Collectives to Enhance Human Diversity and Enable Self-Regulation / 2402.12590 / ISBN:https://doi.org/10.48550/arXiv.2402.12590 / Published by ArXiv / on (web) Publishing site
2. Emergence of Free-Formed AI Collectives
3. Enhanced Performance of Free-Formed AI Collectives
4. Robustness of Free-Formed AI Collectives Against Risks


What if LLMs Have Different World Views: Simulating Alien Civilizations with LLM-based Agents / 2402.13184 / ISBN:https://doi.org/10.48550/arXiv.2402.13184 / Published by ArXiv / on (web) Publishing site
Abstract
1 Introduction
5 Experiment Design
6 Results and Evaluation
A Appendix


The METRIC-framework for assessing data quality for trustworthy AI in medicine: a systematic review / 2402.13635 / ISBN:https://doi.org/10.48550/arXiv.2402.13635 / Published by ArXiv / on (web) Publishing site
Abstract
Introduction
METRIC-framework for medical training data
Discussion
Methods
Additional information


The European Commitment to Human-Centered Technology: The Integral Role of HCI in the EU AI Act's Success / 2402.14728 / ISBN:https://doi.org/10.48550/arXiv.2402.14728 / Published by ArXiv / on (web) Publishing site
2 The EU AI Act
4 There is no trustworthy AI without HCI


Autonomous Vehicles: Evolution of Artificial Intelligence and Learning Algorithms / 2402.17690 / ISBN:https://doi.org/10.48550/arXiv.2402.17690 / Published by ArXiv / on (web) Publishing site
III. Ethical Considerations and Bias in AI-Driven Software Development for Autonomous Vehicles


Envisioning the Applications and Implications of Generative AI for News Media / 2402.18835 / ISBN:https://doi.org/10.48550/arXiv.2402.18835 / Published by ArXiv / on (web) Publishing site
2 The Suitability of Generative AI for Newsroom Tasks


FATE in MMLA: A Student-Centred Exploration of Fairness, Accountability, Transparency, and Ethics in Multimodal Learning Analytics / 2402.19071 / ISBN:https://doi.org/10.48550/arXiv.2402.19071 / Published by ArXiv / on (web) Publishing site
2. Background


Guidelines for Integrating Value Sensitive Design in Responsible AI Toolkits / 2403.00145 / ISBN:https://doi.org/10.48550/arXiv.2403.00145 / Published by ArXiv / on (web) Publishing site
3 Methodology
B Toolkits Considered for Inclusion


Implications of Regulations on the Use of AI and Generative AI for Human-Centered Responsible Artificial Intelligence / 2403.00148 / ISBN:https://doi.org/10.48550/arXiv.2403.00148 / Published by ArXiv / on (web) Publishing site
1 Motivation & Background


The Minimum Information about CLinical Artificial Intelligence Checklist for Generative Modeling Research (MI-CLAIM-GEN) / 2403.02558 / ISBN:https://doi.org/10.48550/arXiv.2403.02558 / Published by ArXiv / on (web) Publishing site
Part 4. Model evaluation
Part 5. Interpretability of generative models
Table 1. Updated MI-CLAIM checklist for generative AI clinical studies.


Towards an AI-Enhanced Cyber Threat Intelligence Processing Pipeline / 2403.03265 / ISBN:https://doi.org/10.48550/arXiv.2403.03265 / Published by ArXiv / on (web) Publishing site
I. Introduction & Motivation
III. The AI-Enhanced CTI Processing Pipeline
IV. Challenges and Considerations


A Survey on Human-AI Teaming with Large Pre-Trained Models / 2403.04931 / ISBN:https://doi.org/10.48550/arXiv.2403.04931 / Published by ArXiv / on (web) Publishing site
3 Effective Human-AI Joint Systems
4 Safe, Secure and Trustworthy AI


Moral Sparks in Social Media Narratives / 2310.19268 / ISBN:https://doi.org/10.48550/arXiv.2310.19268 / Published by ArXiv / on (web) Publishing site
4. Methods


Responsible Artificial Intelligence: A Structured Literature Review / 2403.06910 / ISBN:https://doi.org/10.48550/arXiv.2403.06910 / Published by ArXiv / on (web) Publishing site
3. Analysis
4. Discussion


Legally Binding but Unfair? Towards Assessing Fairness of Privacy Policies / 2403.08115 / ISBN:https://doi.org/10.48550/arXiv.2403.08115 / Published by ArXiv / on (web) Publishing site
1 Introduction
2 Related Work
4 Informational Fairness
5 Representational Fairness


Towards a Privacy and Security-Aware Framework for Ethical AI: Guiding the Development and Assessment of AI Systems / 2403.08624 / ISBN:https://doi.org/10.48550/arXiv.2403.08624 / Published by ArXiv / on (web) Publishing site
5 Towards Privacy- and Security-Aware Framework for Ethical AI


Review of Generative AI Methods in Cybersecurity / 2403.08701 / ISBN:https://doi.org/10.48550/arXiv.2403.08701 / Published by ArXiv / on (web) Publishing site
1 Introduction


Evaluation Ethics of LLMs in Legal Domain / 2403.11152 / ISBN:https://doi.org/10.48550/arXiv.2403.11152 / Published by ArXiv / on (web) Publishing site
3 Method


Trust in AI: Progress, Challenges, and Future Directions / 2403.14680 / ISBN:https://doi.org/10.48550/arXiv.2403.14680 / Published by ArXiv / on (web) Publishing site
Abstract
1. Introduction
3. Findings
4. Discussion
5. Concluding Remarks and Future Directions


AI Ethics: A Bibliometric Analysis, Critical Issues, and Key Gaps / 2403.14681 / ISBN:https://doi.org/10.48550/arXiv.2403.14681 / Published by ArXiv / on (web) Publishing site
Abstract
Method
Results
AI Ethics Development Phases Based on Keyword Analysis
Key AI Ethics Issues
Limitations and Conclusion


Safeguarding Marketing Research: The Generation, Identification, and Mitigation of AI-Fabricated Disinformation / 2403.14706 / ISBN:https://doi.org/10.48550/arXiv.2403.14706 / Published by ArXiv / on (web) Publishing site
Methodology
Results


The Journey to Trustworthy AI - Part 1: Pursuit of Pragmatic Frameworks / 2403.15457 / ISBN:https://doi.org/10.48550/arXiv.2403.15457 / Published by ArXiv / on (web) Publishing site
Abstract
1 Context
2 Trustworthy AI: Too Many Definitions or Lack Thereof?
6 Bias and Fairness
7 Explainable AI as an Enabler of Trustworthy AI
8 Implementation Framework
10 Summary and Next Steps
A Appendix


The Pursuit of Fairness in Artificial Intelligence Models A Survey / 2403.17333 / ISBN:https://doi.org/10.48550/arXiv.2403.17333 / Published by ArXiv / on (web) Publishing site
3 Conceptualizing Fairness and Bias in ML
5 Ways to mitigate bias and promote Fairness


Domain-Specific Evaluation Strategies for AI in Journalism / 2403.17911 / ISBN:https://doi.org/10.48550/arXiv.2403.17911 / Published by ArXiv / on (web) Publishing site
1 Motivation
2 Existing AI Evaluation Approaches
3 Blueprints for AI Evaluation in Journalism


Implications of the AI Act for Non-Discrimination Law and Algorithmic Fairness / 2403.20089 / ISBN:https://doi.org/10.48550/arXiv.2403.20089 / Published by ArXiv / on (web) Publishing site
2 Non-discrimination law vs. algorithmic fairness
3 Implications of the AI Act
4 Practical challenges for compliance


AI Act and Large Language Models (LLMs): When critical issues and privacy impact require human and ethical oversight / 2404.00600 / ISBN:https://doi.org/10.48550/arXiv.2404.00600 / Published by ArXiv / on (web) Publishing site
6. Large Language Models (LLMs) - Introduction


A Review of Multi-Modal Large Language and Vision Models / 2404.01322 / ISBN:https://doi.org/10.48550/arXiv.2404.01322 / Published by ArXiv / on (web) Publishing site
5 Vision Models and Multi-Modal Large Language Models


Designing for Human-Agent Alignment: Understanding what humans want from their agents / 2404.04289 / ISBN:https://doi.org/10.1145/3613905.3650948 / Published by ArXiv / on (web) Publishing site
4 Findings


The Pursuit of Fairness in Artificial Intelligence Models A Survey / 2403.17333 / ISBN:https://doi.org/10.48550/arXiv.2403.17333 / Published by ArXiv / on (web) Publishing site
A Appendices


A Critical Survey on Fairness Benefits of Explainable AI / 2310.13007 / ISBN:https://doi.org/10.1145/3630106.3658990 / Published by ArXiv / on (web) Publishing site
4 Critical Survey
6 Conclusion and Outlook


AI Alignment: A Comprehensive Survey / 2310.19852 / ISBN:https://doi.org/10.48550/arXiv.2310.19852 / Published by ArXiv / on (web) Publishing site
1 Introduction
3 Learning under Distribution Shift
4 Assurance


Regulating AI-Based Remote Biometric Identification. Investigating the Public Demand for Bans, Audits, and Public Database Registrations / 2401.13605 / ISBN:https://doi.org/10.48550/arXiv.2401.13605 / Published by ArXiv / on (web) Publishing site
Abstract
1 Introduction
2 The Need for Governance of AI
4 Public Opinion on AI Governance
5 Research Questions
6 Results
7 Discussion
Appendix


Epistemic Power in AI Ethics Labor: Legitimizing Located Complaints / 2402.08171 / ISBN:https://doi.org/10.1145/3630106.3658973 / Published by ArXiv / on (web) Publishing site
6 Conclusions: Towards Humble Technical Practices


PoliTune: Analyzing the Impact of Data Selection and Fine-Tuning on Economic and Political Biases in Large Language Models / 2404.08699 / ISBN:https://doi.org/10.48550/arXiv.2404.08699 / Published by ArXiv / on (web) Publishing site
3 Methodology
4 Evaluation
5 Conclusion


Detecting AI Generated Text Based on NLP and Machine Learning Approaches / 2404.10032 / ISBN:https://doi.org/10.48550/arXiv.2404.10032 / Published by ArXiv / on (web) Publishing site
I. Introduction


Debunking Robot Rights Metaphysically, Ethically, and Legally / 2404.10072 / ISBN:https://doi.org/10.48550/arXiv.2404.10072 / Published by ArXiv / on (web) Publishing site
6 Posthumanism


Characterizing and modeling harms from interactions with design patterns in AI interfaces / 2404.11370 / ISBN:https://doi.org/10.48550/arXiv.2404.11370 / Published by ArXiv / on (web) Publishing site
2 Background & Related Work


Taxonomy to Regulation: A (Geo)Political Taxonomy for AI Risks and Regulatory Measures in the EU AI Act / 2404.11476 / ISBN:https://doi.org/10.48550/arXiv.2404.11476 / Published by ArXiv / on (web) Publishing site
3 A Geo-Political AI Risk Taxonomy
4 European Union Artificial Intelligence Act


Just Like Me: The Role of Opinions and Personal Experiences in The Perception of Explanations in Subjective Decision-Making / 2404.12558 / ISBN:https://doi.org/10.48550/arXiv.2404.12558 / Published by ArXiv / on (web) Publishing site
1 Introduction


Large Language Model Supply Chain: A Research Agenda / 2404.12736 / ISBN:https://doi.org/10.48550/arXiv.2404.12736 / Published by ArXiv / on (web) Publishing site
4 LLM Lifecycle


The Necessity of AI Audit Standards Boards / 2404.13060 / ISBN:https://doi.org/10.48550/arXiv.2404.13060 / Published by ArXiv / on (web) Publishing site
1 Introduction
3 Governance for safety
4 Auditing standards body, not standard audits


A Practical Multilevel Governance Framework for Autonomous and Intelligent Systems / 2404.13719 / ISBN:https://doi.org/10.48550/arXiv.2404.13719 / Published by ArXiv / on (web) Publishing site
I. Introduction
III. A Practical Multilevel Governance Framework for AIs
IV. Application of the Framework for the Development of AIs


Designing Safe and Engaging AI Experiences for Children: Towards the Definition of Best Practices in UI/UX Design / 2404.14218 / ISBN:https://doi.org/10.48550/arXiv.2404.14218 / Published by ArXiv / on (web) Publishing site
Abstract
1 Introduction
4 Metrics for Assessing Trustworthiness, Reliability, and Safety in Human-AI Interaction


AI Procurement Checklists: Revisiting Implementation in the Age of AI Governance / 2404.14660 / ISBN:https://doi.org/10.48550/arXiv.2404.14660 / Published by ArXiv / on (web) Publishing site
Abstract
1 Technical assessments require an AI expert to complete — and we don’t have enough experts
2 Procurement Loopholes Exist


A Survey on Large Language Models for Critical Societal Domains: Finance, Healthcare, and Law / 2405.01769 / ISBN:https://doi.org/10.48550/arXiv.2405.01769 / Published by ArXiv / on (web) Publishing site
3 Finance
4 Medicine and Healthcare
6 Ethics


AI-Powered Autonomous Weapons Risk Geopolitical Instability and Threaten AI Research / 2405.01859 / ISBN:https://doi.org/10.48550/arXiv.2405.01859 / Published by ArXiv / on (web) Publishing site
1. Introduction


Responsible AI: Portraits with Intelligent Bibliometrics / 2405.02846 / ISBN:https://doi.org/10.48550/arXiv.2405.02846 / Published by ArXiv / on (web) Publishing site
Abstract
I. Introduction
II. Conceptualization: Responsible AI
III. Data and Methodology
IV. Bibliometric Portraits of Responsible AI
V. Discussion and Conclusions
Supplemental Materials
Authors


Exploring the Potential of the Large Language Models (LLMs) in Identifying Misleading News Headlines / 2405.03153 / ISBN:https://doi.org/10.48550/arXiv.2405.03153 / Published by ArXiv / on (web) Publishing site
4 Results


Trustworthy AI-Generative Content in Intelligent 6G Network: Adversarial, Privacy, and Fairness / 2405.05930 / ISBN:https://doi.org/10.48550/arXiv.2405.05930 / Published by ArXiv / on (web) Publishing site
V. Fairness of AIGC in 6G Network
VII. Challenges and Future Research Directions


RAI Guidelines: Method for Generating Responsible AI Guidelines Grounded in Regulations and Usable by (Non-)Technical Roles / 2307.15158 / ISBN:https://doi.org/10.48550/arXiv.2307.15158 / Published by ArXiv / on (web) Publishing site
1 Introduction
2 Related Work
4 Method for Generating Responsible AI Guidelines
5 Evaluation of the 22 Responsible AI Guidelines


Should agentic conversational AI change how we think about ethics? Characterising an interactional ethics centred on respect / 2401.09082 / ISBN:https://doi.org/10.48550/arXiv.2401.09082 / Published by ArXiv / on (web) Publishing site
Social-interactional harms
Design implications for LLM agents
Informing existing HCI approaches


Unsocial Intelligence: an Investigation of the Assumptions of AGI Discourse / 2401.13142 / ISBN:https://doi.org/10.48550/arXiv.2401.13142 / Published by ArXiv / on (web) Publishing site
2 Between Human Intelligence and Technology: AGI’s Dual Value-Laden Pedigrees
3 The Motley Choices of AGI Discourse
A Dimensions of AGI: a Summary


Not My Voice! A Taxonomy of Ethical and Safety Harms of Speech Generators / 2402.01708 / ISBN:https://doi.org/10.48550/arXiv.2402.01708 / Published by ArXiv / on (web) Publishing site
2 Related Work


A scoping review of using Large Language Models (LLMs) to investigate Electronic Health Records (EHRs) / 2405.03066 / ISBN:https://doi.org/10.48550/arXiv.2405.03066 / Published by ArXiv / on (web) Publishing site
Abstract
1 Introduction
2 Materials
3 Results
4 Discussion
5 Conclusions
Appendix


Using ChatGPT for Thematic Analysis / 2405.08828 / ISBN:https://doi.org/10.48550/arXiv.2405.08828 / Published by ArXiv / on (web) Publishing site
3 Pilot-testing: UN Policy Documents Thematic Analysis Supported by GPT


When AI Eats Itself: On the Caveats of Data Pollution in the Era of Generative AI / 2405.09597 / ISBN:https://doi.org/10.48550/arXiv.2405.09597 / Published by ArXiv / on (web) Publishing site
2 RQ1: What Happens When AI Eats Itself?
3 RQ2: What Technical Strategies Can Be Employed to Mitigate the Negative Consequences of AI Autophagy?


Cyber Risks of Machine Translation Critical Errors : Arabic Mental Health Tweets as a Case Study / 2405.11668 / ISBN:https://doi.org/10.48550/arXiv.2405.11668 / Published by ArXiv / on (web) Publishing site
Abstract
1. Introduction
5. Quality Metrics Performance
6. Conclusion


The Narrow Depth and Breadth of Corporate Responsible AI Research / 2405.12193 / ISBN:https://doi.org/10.48550/arXiv.2405.12193 / Published by ArXiv / on (web) Publishing site
5 The Narrow Breadth of Industry’s Responsible AI Research
S1 Additional Analyses on Engagement Analysis
S2 Additional Analyses on Linguistic Analysis


Pragmatic auditing: a pilot-driven approach for auditing Machine Learning systems / 2405.13191 / ISBN:https://doi.org/10.48550/arXiv.2405.13191 / Published by ArXiv / on (web) Publishing site
3 The Audit Procedure
4 Conducting the Pilots
6 Conclusion and Outlook
D Lifecycle Mapping of Pilot 1
E Lifecycle Mapping of Pilot 2: The GARMI Vision Module


Towards Clinical AI Fairness: Filling Gaps in the Puzzle / 2405.17921 / ISBN:https://doi.org/10.48550/arXiv.2405.17921 / Published by ArXiv / on (web) Publishing site
Methods in clinical AI fairness research
Discussion
Methods
Additional material


The Future of Child Development in the AI Era. Cross-Disciplinary Perspectives Between AI and Child Development Experts / 2405.19275 / ISBN:https://doi.org/10.48550/arXiv.2405.19275 / Published by ArXiv / on (web) Publishing site
1. Introduction
3. Discussion


Using Large Language Models for Humanitarian Frontline Negotiation: Opportunities and Considerations / 2405.20195 / ISBN:https://doi.org/10.48550/arXiv.2405.20195 / Published by ArXiv / on (web) Publishing site
3. Method
A. Appendix


The AI Alignment Paradox / 2405.20806 / ISBN:https://doi.org/10.48550/arXiv.2405.20806 / Published by ArXiv / on (web) Publishing site
Paper


Responsible AI for Earth Observation / 2405.20868 / ISBN:https://doi.org/10.48550/arXiv.2405.20868 / Published by ArXiv / on (web) Publishing site
2 Mitigating (Unfair) Bias
5 Maintaining Scientific Excellence, Open Data, and Guiding AI Usage Based on Ethical Principles in EO


Gender Bias Detection in Court Decisions: A Brazilian Case Study / 2406.00393 / ISBN:https://doi.org/10.48550/arXiv.2406.00393 / Published by ArXiv / on (web) Publishing site
3 Framework
4 Discussion


Transforming Computer Security and Public Trust Through the Exploration of Fine-Tuning Large Language Models / 2406.00628 / ISBN:https://doi.org/10.48550/arXiv.2406.00628 / Published by ArXiv / on (web) Publishing site
2 Background, Foundational Studies, and Discussion:
3 Experimental Design, Overview, and Discussion


Evaluating AI fairness in credit scoring with the BRIO tool / 2406.03292 / ISBN:https://doi.org/10.48550/arXiv.2406.03292 / Published by ArXiv / on (web) Publishing site
Abstract
1 Introduction
3 ML model construction
4 Fairness violation analysis in BRIO
6 Risk analysis via BRIO for the German Credit Dataset
7 Revenue analysis
8 Conclusions


Promoting Fairness and Diversity in Speech Datasets for Mental Health and Neurological Disorders Research / 2406.04116 / ISBN:https://doi.org/10.48550/arXiv.2406.04116 / Published by ArXiv / on (web) Publishing site
3. Related Work


MoralBench: Moral Evaluation of LLMs / 2406.04428 / Published by ArXiv / on (web) Publishing site
Abstract
1 Introduction
2 Related Work


Can Prompt Modifiers Control Bias? A Comparative Analysis of Text-to-Image Generative Models / 2406.05602 / Published by ArXiv / on (web) Publishing site
Abstract
1. Introduction
6. Discussion
7. Conclusion


Deception Analysis with Artificial Intelligence: An Interdisciplinary Perspective / 2406.05724 / ISBN:https://doi.org/10.48550/arXiv.2406.05724 / Published by ArXiv / on (web) Publishing site
3 Reductionism & Previous Research in Deceptive AI


An Empirical Design Justice Approach to Identifying Ethical Considerations in the Intersection of Large Language Models and Social Robotics / 2406.06400 / ISBN:https://doi.org/10.48550/arXiv.2406.06400 / Published by ArXiv / on (web) Publishing site
4 Findings
5 Discussion


Global AI Governance in Healthcare: A Cross-Jurisdictional Regulatory Analysis / 2406.08695 / ISBN:https://doi.org/10.48550/arXiv.2406.08695 / Published by ArXiv / on (web) Publishing site
4 Global Regulatory Landscape of AI
A Supplemental Tables


Fair by design: A sociotechnical approach to justifying the fairness of AI-enabled systems across the lifecycle / 2406.09029 / ISBN:https://doi.org/10.48550/arXiv.2406.09029 / Published by ArXiv / on (web) Publishing site
1 Introduction
3 Assuring fairness across the AI lifecycle
4 Assuring AI fairness in healthcare


Some things never change: how far generative AI can really change software engineering practice / 2406.09725 / ISBN:https://doi.org/10.48550/arXiv.2406.09725 / Published by ArXiv / on (web) Publishing site
2 Background and related work


Applications of Generative AI in Healthcare: algorithmic, ethical, legal and societal considerations / 2406.10632 / ISBN:https://doi.org/10.48550/arXiv.2406.10632 / Published by ArXiv / on (web) Publishing site
III. Analysis
Appendix C Algorithmic / technical aspects


Current state of LLM Risks and AI Guardrails / 2406.12934 / ISBN:https://doi.org/10.48550/arXiv.2406.12934 / Published by ArXiv / on (web) Publishing site
Abstract
3 Strategies in Securing Large Language models
4 Challenges in Implementing Guardrails
7 Conclusion


Documenting Ethical Considerations in Open Source AI Models / 2406.18071 / ISBN:https://doi.org/10.48550/arXiv.2406.18071 / Published by ArXiv / on (web) Publishing site
2 RELATED WORK


A Survey on Privacy Attacks Against Digital Twin Systems in AI-Robotics / 2406.18812 / ISBN:https://doi.org/10.48550/arXiv.2406.18812 / Published by ArXiv / on (web) Publishing site
III. ATTACKS ON DT-INTEGRATED AI ROBOTS


SecGenAI: Enhancing Security of Cloud-based Generative AI Applications within Australian Critical Technologies of National Interest / 2407.01110 / ISBN:https://doi.org/10.48550/arXiv.2407.01110 / Published by ArXiv / on (web) Publishing site
IV. SECGENAI FRAMEWORK REQUIREMENTS SPECIFICATIONS


Artificial intelligence, rationalization, and the limits of control in the public sector: the case of tax policy optimization / 2407.05336 / ISBN:https://doi.org/10.48550/arXiv.2407.05336 / Published by ArXiv / on (web) Publishing site
1. Introduction
4. AI-driven tax policy to reduce economic inequality: a thought experiment


A Blueprint for Auditing Generative AI / 2407.05338 / ISBN:https://doi.org/10.48550/arXiv.2407.05338 / Published by ArXiv / on (web) Publishing site
4 Governance audits
5 Model audits
7 Clarifications and limitations
8 Conclusion


Challenges and Best Practices in Corporate AI Governance:Lessons from the Biopharmaceutical Industry / 2407.05339 / ISBN:https://doi.org/10.48550/arXiv.2407.05339 / Published by ArXiv / on (web) Publishing site
3 Practical implementation challenges | What to be prepared for?
5 Concluding remarks | Upfront investments vs. long-term benefits


Operationalising AI governance through ethics-based auditing: An industry case study / 2407.06232 / Published by ArXiv / on (web) Publishing site
Abstract
2. The need to operationalise AI governance
6. Lessons learned from AstraZeneca’s 2021 AI audit
APPENDIX 1


Auditing of AI: Legal, Ethical and Technical Approaches / 2407.06235 / Published by ArXiv / on (web) Publishing site
3 The need to audit AI systems – a confluence of top-down and bottom-up pressures


Evolving AI Collectives to Enhance Human Diversity and Enable Self-Regulation / 2402.12590 / ISBN:https://doi.org/10.48550/arXiv.2402.12590 / Published by ArXiv / on (web) Publishing site
D. Results for Claude 3


FUTURE-AI: International consensus guideline for trustworthy and deployable artificial intelligence in healthcare / 2309.12325 / ISBN:https://doi.org/10.48550/arXiv.2309.12325 / Published by ArXiv / on (web) Publishing site
Table 1


Bridging the Global Divide in AI Regulation: A Proposal for a Contextual, Coherent, and Commensurable Framework / 2303.11196 / ISBN:https://doi.org/10.48550/arXiv.2303.11196 / Published by ArXiv / on (web) Publishing site
I. Introduction
II. Global Divide in AI Regulation: Horizontally. Context-Specific
III. Striking a Balance Between the Two Approaches
IV. Proposing an Alternative 3C Framework


CogErgLLM: Exploring Large Language Model Systems Design Perspective Using Cognitive Ergonomics / 2407.02885 / ISBN:https://doi.org/10.48550/arXiv.2407.02885 / Published by ArXiv / on (web) Publishing site
4 Design Framework


Past, Present, and Future: A Survey of The Evolution of Affective Robotics For Well-being / 2407.02957 / ISBN:https://doi.org/10.48550/arXiv.2407.02957 / Published by ArXiv / on (web) Publishing site
VI. Future Opportunities in Affective Robotics for Well-Being


Have We Reached AGI? Comparing ChatGPT, Claude, and Gemini to Human Literacy and Education Benchmarks / 2407.09573 / ISBN:https://doi.org/10.48550/arXiv.2407.09573 / Published by ArXiv / on (web) Publishing site
2 Literature Review
3 Methodology
6 Conclusion


Thorns and Algorithms: Navigating Generative AI Challenges Inspired by Giraffes and Acacias / 2407.11360 / ISBN:https://doi.org/10.48550/arXiv.2407.11360 / Published by ArXiv / on (web) Publishing site
4 Generative AI and Humans: Risks and Mitigation


Prioritizing High-Consequence Biological Capabilities in Evaluations of Artificial Intelligence Models / 2407.13059 / ISBN:https://doi.org/10.48550/arXiv.2407.13059 / Published by ArXiv / on (web) Publishing site
Introduction


Report on the Conference on Ethical and Responsible Design in the National AI Institutes: A Summary of Challenges / 2407.13926 / ISBN:https://doi.org/10.48550/arXiv.2407.13926 / Published by ArXiv / on (web) Publishing site
2. Ethics Frameworks


Assurance of AI Systems From a Dependability Perspective / 2407.13948 / ISBN:https://doi.org/10.48550/arXiv.2407.13948 / Published by ArXiv / on (web) Publishing site
1 Introduction
3 Assurance of AI Systems for Specific Functions


Open Artificial Knowledge / 2407.14371 / ISBN:https://doi.org/10.48550/arXiv.2407.14371 / Published by ArXiv / on (web) Publishing site
2. Key Challenges of Artificial Data
3. OAK Dataset
Appendices


Honest Computing: Achieving demonstrable data lineage and provenance for driving data and process-sensitive policies / 2407.14390 / ISBN:https://doi.org/10.48550/arXiv.2407.14390 / Published by ArXiv / on (web) Publishing site
4. Discussion


RogueGPT: dis-ethical tuning transforms ChatGPT4 into a Rogue AI in 158 Words / 2407.15009 / ISBN:https://doi.org/10.48550/arXiv.2407.15009 / Published by ArXiv / on (web) Publishing site
I. Introduction


Mapping the individual, social, and biospheric impacts of Foundation Models / 2407.17129 / ISBN:https://doi.org/10.48550/arXiv.2407.17129 / Published by ArXiv / on (web) Publishing site
5 Discussion: Grappling with the Scale and Interconnectedness of Foundation Models


Navigating the United States Legislative Landscape on Voice Privacy: Existing Laws, Proposed Bills, Protection for Children, and Synthetic Data for AI / 2407.19677 / ISBN:https://doi.org/10.48550/arXiv.2407.19677 / Published by ArXiv / on (web) Publishing site
2. American Privacy Rights Act of 2024
4. State-level Privacy Regulations in the US


Deepfake Media Forensics: State of the Art and Challenges Ahead / 2408.00388 / ISBN:https://doi.org/10.48550/arXiv.2408.00388 / Published by ArXiv / on (web) Publishing site
3. Deepfake Attribution and Recognition
5. Deepfakes Detection Method on Realistic Scenarios


Integrating ESG and AI: A Comprehensive Responsible AI Assessment Framework / 2408.00965 / ISBN:https://doi.org/10.48550/arXiv.2408.00965 / Published by ArXiv / on (web) Publishing site
1 Introduction
2 Background and Literature Review
3 Methodology
4 ESG-AI framework
5 Discussion


Surveys Considered Harmful? Reflecting on the Use of Surveys in AI Research, Development, and Governance / 2408.01458 / ISBN:https://doi.org/10.48550/arXiv.2408.01458 / Published by ArXiv / on (web) Publishing site
2 Related Work


Improving Large Language Model (LLM) fidelity through context-aware grounding: A systematic approach to reliability and veracity / 2408.04023 / ISBN:https://doi.org/10.48550/arXiv.2408.04023 / Published by ArXiv / on (web) Publishing site
3. Proposed framework
5. Model Training
6. Results


AI-Driven Chatbot for Intrusion Detection in Edge Networks: Enhancing Cybersecurity with Ethical User Consent / 2408.04281 / ISBN:https://doi.org/10.48550/arXiv.2408.04281 / Published by ArXiv / on (web) Publishing site
II. Related Work
IV. Graphical User Interface (GUI)
V. Results
VI. Conclusion


Criticizing Ethics According to Artificial Intelligence / 2408.04609 / ISBN:https://doi.org/10.48550/arXiv.2408.04609 / Published by ArXiv / on (web) Publishing site
4 Exploring epistemic challenges


Between Copyright and Computer Science: The Law and Ethics of Generative AI / 2403.14653 / ISBN:https://doi.org/10.48550/arXiv.2403.14653 / Published by ArXiv / on (web) Publishing site
III. A Guide for Data in LLM Research


The Responsible Foundation Model Development Cheatsheet: A Review of Tools & Resources / 2406.16746 / ISBN:https://doi.org/10.48550/arXiv.2406.16746 / Published by ArXiv / on (web) Publishing site
1 Introduction
2 Methodology & Guidelines
4 Data Preparation
7 Environmental Impact
8 Model Evaluation
10 Discussion


Recent Advances in Generative AI and Large Language Models: Current Status, Challenges, and Perspectives / 2407.14962 / ISBN:https://doi.org/10.48550/arXiv.2407.14962 / Published by ArXiv / on (web) Publishing site
II. Generative AI
V. Bridging Research Gaps and Future Directions


VersusDebias: Universal Zero-Shot Debiasing for Text-to-Image Models via SLM-Based Prompt Engineering and Generative Adversary / 2407.19524 / ISBN:https://doi.org/10.48550/arXiv.2407.19524 / Published by ArXiv / on (web) Publishing site
3 Method


Speculations on Uncertainty and Humane Algorithms / 2408.06736 / ISBN:https://doi.org/10.48550/arXiv.2408.06736 / Published by ArXiv / on (web) Publishing site
3 Uncertainty Ex Machina


Visualization Atlases: Explaining and Exploring Complex Topics through Data, Visualization, and Narration / 2408.07483 / ISBN:https://doi.org/10.48550/arXiv.2408.07483 / Published by ArXiv / on (web) Publishing site
2 Visualization Atlases: Examples and Collection
3 Visualization Atlas Design Patterns


Neuro-Symbolic AI for Military Applications / 2408.09224 / ISBN:https://doi.org/10.48550/arXiv.2408.09224 / Published by ArXiv / on (web) Publishing site
V. Challenges and Risks


The Problems with Proxies: Making Data Work Visible through Requester Practices / 2408.11667 / ISBN:https://doi.org/10.48550/arXiv.2408.11667 / Published by ArXiv / on (web) Publishing site
Related Work
Methods


Catalog of General Ethical Requirements for AI Certification / 2408.12289 / ISBN:https://doi.org/10.48550/arXiv.2408.12289 / Published by ArXiv / on (web) Publishing site
6 Fairness (F)
7 Privacy and Data Protection (P)
8 Safety and Robustness (SR)
10 Transparency and Explainability (T)
12 Concluding Remarks


Dataset | Mindset = Explainable AI | Interpretable AI / 2408.12420 / ISBN:https://doi.org/10.48550/arXiv.2408.12420 / Published by ArXiv / on (web) Publishing site
2. Literature Review
4. Experiment Implementation, Results and Analysis


Is Generative AI the Next Tactical Cyber Weapon For Threat Actors? Unforeseen Implications of AI Generated Cyber Attacks / 2408.12806 / ISBN:https://doi.org/10.48550/arXiv.2408.12806 / Published by ArXiv / on (web) Publishing site
II. Related Work
IV. Attack Methodology


Has Multimodal Learning Delivered Universal Intelligence in Healthcare? A Comprehensive Survey / 2408.12880 / ISBN:https://doi.org/10.48550/arXiv.2408.12880 / Published by ArXiv / on (web) Publishing site
3 Multimodal Medical Studies
4 Contrastive Foundation Models (CFMs)
5 Multimodal LLMs (MLLMs)
6 Discussions of Current Studies
7 Challenges and Future Directions
Appendix


Aligning XAI with EU Regulations for Smart Biomedical Devices: A Methodology for Compliance Analysis / 2408.15121 / ISBN:https://doi.org/10.48550/arXiv.2408.15121 / Published by ArXiv / on (web) Publishing site
4 Background


What Is Required for Empathic AI? It Depends, and Why That Matters for AI Developers and Users / 2408.15354 / ISBN:https://doi.org/10.48550/arXiv.2408.15354 / Published by ArXiv / on (web) Publishing site
Implications for AI Creators and Users


Trustworthy and Responsible AI for Human-Centric Autonomous Decision-Making Systems / 2408.15550 / ISBN:https://doi.org/10.48550/arXiv.2408.15550 / Published by ArXiv / on (web) Publishing site
Abstract
1 Introduction
4 Biases
5 Trustworthy and Responsible AI in Human-centric Applications
6 Open Challenges
7 Guidelines and Recommendations
8 Conclusion and Final Remarks


A Survey for Large Language Models in Biomedicine / 2409.00133 / ISBN:https://doi.org/10.48550/arXiv.2409.00133 / Published by ArXiv / on (web) Publishing site
3 LLMs in Zero-Shot Biomedical Applications
4 Adapting General LLMs to the Biomedical Field


Digital Homunculi: Reimagining Democracy Research with Generative Agents / 2409.00826 / ISBN:https://doi.org/10.48550/arXiv.2409.00826 / Published by ArXiv / on (web) Publishing site
5. Annoyances or Dealbreakers?


The overlooked need for Ethics in Complexity Science: Why it matters / 2409.02002 / ISBN:https://doi.org/10.48550/arXiv.2409.02002 / Published by ArXiv / on (web) Publishing site
Limited research on ethics in complexity science


DetoxBench: Benchmarking Large Language Models for Multitask Fraud & Abuse Detection / 2409.06072 / ISBN:https://doi.org/10.48550/arXiv.2409.06072 / Published by ArXiv / on (web) Publishing site
1 Introduction
5 Prompting
6 Results


Trust and ethical considerations in a multi-modal, explainable AI-driven chatbot tutoring system: The case of collaboratively solving Rubik's Cube / 2402.01760 / ISBN:https://doi.org/10.48550/arXiv.2402.01760 / Published by ArXiv / on (web) Publishing site
D. CausalRating: A Tool To Rate Sentiment Analysis Systems for Bias


Don't Kill the Baby: The Case for AI in Arbitration / 2408.11608 / ISBN:https://doi.org/10.48550/arXiv.2408.11608 / Published by ArXiv / on (web) Publishing site
II. The Critics Are Killing the Baby


On the Creativity of Large Language Models / 2304.00008 / ISBN:https://doi.org/10.48550/arXiv.2304.00008 / Published by ArXiv / on (web) Publishing site
5 Practical Implications


Artificial intelligence to advance Earth observation: : A review of models, recent trends, and pathways forward / 2305.08413 / ISBN:https://doi.org/10.48550/arXiv.2305.08413 / Published by ArXiv / on (web) Publishing site
Part I Modelling - Machine learning, computer vision and processing
1 Machine learning and computer vision for Earth observation
5 Physics-aware machine learning


LLM generated responses to mitigate the impact of hate speech / 2311.16905 / ISBN:https://doi.org/10.48550/arXiv.2311.16905 / Published by ArXiv / on (web) Publishing site
Abstract
1 Introduction
6 Experiment
7 Results
9 Limitations
10 Ethical Considerations
A Reproducibility
B Experiment Setup Details


Why business adoption of quantum and AI technology must be ethical / 2312.10081 / ISBN:https://doi.org/10.48550/arXiv.2312.10081 / Published by ArXiv / on (web) Publishing site
Argument by analogy: The case of sustainability


Data-Centric Foundation Models in Computational Healthcare: A Survey / 2401.02458 / ISBN:https://doi.org/10.48550/arXiv.2401.02458 / Published by ArXiv / on (web) Publishing site
7 Data Privacy
8 Performance Evaluation
9 Challenges and Opportunities


Large language models as linguistic simulators and cognitive models in human research / 2402.04470 / ISBN:https://doi.org/10.48550/arXiv.2402.04470 / Published by ArXiv / on (web) Publishing site
Six fallacies that misinterpret language models


Navigating LLM Ethics: Advancements, Challenges, and Future Directions / 2406.18841 / ISBN:https://doi.org/10.48550/arXiv.2406.18841 / Published by ArXiv / on (web) Publishing site
IV. Findings and Resultant Themes
VI. Conclusion and Future directions


How Mature is Requirements Engineering for AI-based Systems? A Systematic Mapping Study on Practices, Challenges, and Future Research Directions / 2409.07192 / ISBN:https://doi.org/10.48550/arXiv.2409.07192 / Published by ArXiv / on (web) Publishing site
4 Results
5 Open Challenges and Future Research Directions (RQ5)
8 Conclusion


Synthetic Human Memories: AI-Edited Images and Videos Can Implant False Memories and Distort Recollection / 2409.08895 / ISBN:https://doi.org/10.48550/arXiv.2409.08895 / Published by ArXiv / on (web) Publishing site
2 Methodology


Improving governance outcomes through AI documentation: Bridging theory and practice / 2409.08960 / ISBN:https://doi.org/10.48550/arXiv.2409.08960 / Published by ArXiv / on (web) Publishing site
4 Results


ValueCompass: A Framework for Measuring Contextual Value Alignment Between Human and LLMs / 2409.09586 / ISBN:https://doi.org/10.48550/arXiv.2409.09586 / Published by ArXiv / on (web) Publishing site
1 Introduction
2 Related Work
3 ValueCompass Framework


Beyond Algorithmic Fairness: A Guide to Develop and Deploy Ethical AI-Enabled Decision-Support Tools / 2409.11489 / ISBN:https://doi.org/10.48550/arXiv.2409.11489 / Published by ArXiv / on (web) Publishing site
2 Ethical Considerations in AI-Enabled Optimization
3 Case Studies in AI-Enabled Optimization
Appendix B Technical and Conceptual Details for the Power Systems Case Study


Reporting Non-Consensual Intimate Media: An Audit Study of Deepfakes / 2409.12138 / ISBN:https://doi.org/10.48550/arXiv.2409.12138 / Published by ArXiv / on (web) Publishing site
3 Method


XTRUST: On the Multilingual Trustworthiness of Large Language Models / 2409.15762 / ISBN:https://doi.org/10.48550/arXiv.2409.15762 / Published by ArXiv / on (web) Publishing site
4 Experiments


Artificial Human Intelligence: The role of Humans in the Development of Next Generation AI / 2409.16001 / ISBN:https://doi.org/10.48550/arXiv.2409.16001 / Published by ArXiv / on (web) Publishing site
II. Views on Intelligence
III. Path Leading to AHI


Ethical and Scalable Automation: A Governance and Compliance Framework for Business Applications / 2409.16872 / ISBN:https://doi.org/10.48550/arXiv.2409.16872 / Published by ArXiv / on (web) Publishing site
Abstract
1. Introduction
2. Literature Review
3. Methodology
4. Framework Development
5. Analysis and Discussion
6. Conclusion


Decoding Large-Language Models: A Systematic Overview of Socio-Technical Impacts, Constraints, and Emerging Questions / 2409.16974 / ISBN:https://doi.org/10.48550/arXiv.2409.16974 / Published by ArXiv / on (web) Publishing site
5 Aims & Objectives (RQ1)
6 Methodologies & Capabilities (RQ2)
7 Limitations & Considerations (RQ3)
8 Discussion


Safety challenges of AI in medicine / 2409.18968 / ISBN:https://doi.org/10.48550/arXiv.2409.18968 / Published by ArXiv / on (web) Publishing site
2 Inherent problems of AI related to medicine


Responsible AI in Open Ecosystems: Reconciling Innovation with Risk Assessment and Disclosure / 2409.19104 / ISBN:https://doi.org/10.48550/arXiv.2409.19104 / Published by ArXiv / on (web) Publishing site
1 Introduction
2 Related Work
3 Methods
5 Discussion
A Benchmarks in Open LLM Leaderboard


Clinnova Federated Learning Proof of Concept: Key Takeaways from a Cross-border Collaboration / 2410.02443 / ISBN:https://doi.org/10.48550/arXiv.2410.02443 / Published by ArXiv / on (web) Publishing site
VI. Collaborative Network
VII. Evaluations and Experiments


DailyDilemmas: Revealing Value Preferences of LLMs with Quandaries of Daily Life / 2410.02683 / ISBN:https://doi.org/10.48550/arXiv.2410.02683 / Published by ArXiv / on (web) Publishing site
5 Examining LLM's Adherence to Design Principles and the Steerability of Value Preferences


Application of AI in Credit Risk Scoring for Small Business Loans: A case study on how AI-based random forest model improves a Delphi model outcome in the case of Azerbaijani SMEs / 2410.05330 / ISBN:https://doi.org/10.48550/arXiv.2410.05330 / Published by ArXiv / on (web) Publishing site
Methodology
Results
Appendix 1. Delphi model Python code (a logistic regression)
Appendix 2. Random forest Python code


AI-Press: A Multi-Agent News Generating and Feedback Simulation System Powered by Large Language Models / 2410.07561 / ISBN:https://doi.org/10.48550/arXiv.2410.07561 / Published by ArXiv / on (web) Publishing site
4 Experimental Setup
5 Results
6 Conclusion


From human-centered to social-centered artificial intelligence: Assessing ChatGPT's impact through disruptive events / 2306.00227 / ISBN:https://doi.org/10.48550/arXiv.2306.00227 / Published by ArXiv / on (web) Publishing site
Introduction


Learning Human-like Representations to Enable Learning Human Values / 2312.14106 / ISBN:https://doi.org/10.48550/arXiv.2312.14106 / Published by ArXiv / on (web) Publishing site
A. Appendix


Study on the Helpfulness of Explainable Artificial Intelligence / 2410.11896 / ISBN:https://doi.org/10.48550/arXiv.2410.11896 / Published by ArXiv / on (web) Publishing site
3 An objective Methodology for evaluating XAI
4 Survey Results


Navigating the Cultural Kaleidoscope: A Hitchhiker's Guide to Sensitivity in Large Language Models / 2410.12880 / ISBN:https://doi.org/10.48550/arXiv.2410.12880 / Published by ArXiv / on (web) Publishing site
5 Experimental setup


How Do AI Companies Fine-Tune Policy? Examining Regulatory Capture in AI Governance / 2410.13042 / ISBN:https://doi.org/10.48550/arXiv.2410.13042 / Published by ArXiv / on (web) Publishing site
Executive Summary
3 Methods
5 Mechanisms of Industry Influence in US AI Policy


Data Defenses Against Large Language Models / 2410.13138 / ISBN:https://doi.org/10.48550/arXiv.2410.13138 / Published by ArXiv / on (web) Publishing site
2 Ethics of Resisting LLM Inference


Do LLMs Have Political Correctness? Analyzing Ethical Biases and Jailbreak Vulnerabilities in AI Systems / 2410.13334 / ISBN:https://doi.org/10.48550/arXiv.2410.13334 / Published by ArXiv / on (web) Publishing site
4 Experiment


A Simulation System Towards Solving Societal-Scale Manipulation / 2410.13915 / ISBN:https://doi.org/10.48550/arXiv.2410.13915 / Published by ArXiv / on (web) Publishing site
3 Methodology


Jailbreaking and Mitigation of Vulnerabilities in Large Language Models / 2410.15236 / ISBN:https://doi.org/10.48550/arXiv.2410.15236 / Published by ArXiv / on (web) Publishing site
Abstract
II. Background and Concepts
V. Evaluation and Benchmarking


Ethical AI in Retail: Consumer Privacy and Fairness / 2410.15369 / ISBN:https://doi.org/10.48550/arXiv.2410.15369 / Published by ArXiv / on (web) Publishing site
1.0 Introduction


Redefining Finance: The Influence of Artificial Intelligence (AI) and Machine Learning (ML) / 2410.15951 / ISBN:https://doi.org/10.48550/arXiv.2410.15951 / Published by ArXiv / on (web) Publishing site
Current Perspective on AI & ML in Finance


Towards Automated Penetration Testing: Introducing LLM Benchmark, Analysis, and Improvements / 2410.17141 / ISBN:https://doi.org/10.48550/arXiv.2410.17141 / Published by ArXiv / on (web) Publishing site
5 Discussion


Demystifying Large Language Models for Medicine: A Primer / 2410.18856 / ISBN:https://doi.org/10.48550/arXiv.2410.18856 / Published by ArXiv / on (web) Publishing site
Introduction
Task Formulation


The Cat and Mouse Game: The Ongoing Arms Race Between Diffusion Models and Detection Methods / 2410.18866 / ISBN:https://doi.org/10.48550/arXiv.2410.18866 / Published by ArXiv / on (web) Publishing site
Abstract
I. Introduction
VI. Evaluation Metrics
VIII. Research Gaps and Future Directions


The Trap of Presumed Equivalence: Artificial General Intelligence Should Not Be Assessed on the Scale of Human Intelligence / 2410.21296 / ISBN:https://doi.org/10.48550/arXiv.2410.21296 / Published by ArXiv / on (web) Publishing site
4 Free Evolution, The Imperative and Intent
5 The Runaway AGI Evolutionary Gap


Standardization Trends on Safety and Trustworthiness Technology for Advanced AI / 2410.22151 / ISBN:https://doi.org/10.48550/arXiv.2410.22151 / Published by ArXiv / on (web) Publishing site
3 Trends in advanced AI safety and trustworthiness standardization


Democratizing Reward Design for Personal and Representative Value-Alignment / 2410.22203 / ISBN:https://doi.org/10.48550/arXiv.2410.22203 / Published by ArXiv / on (web) Publishing site
4 Study Design & Methodology


Moral Agency in Silico: Exploring Free Will in Large Language Models / 2410.23310 / ISBN:https://doi.org/10.48550/arXiv.2410.23310 / Published by ArXiv / on (web) Publishing site
Methodology
Discussion


Web Scraping for Research: Legal, Ethical, Institutional, and Scientific Considerations / 2410.23432 / ISBN:https://doi.org/10.48550/arXiv.2410.23432 / Published by ArXiv / on (web) Publishing site
4 Recommendations
Appendices


Using Large Language Models for a standard assessment mapping for sustainable communities / 2411.00208 / ISBN:https://doi.org/10.48550/arXiv.2411.00208 / Published by ArXiv / on (web) Publishing site
1 Introduction


Where Assessment Validation and Responsible AI Meet / 2411.02577 / ISBN:https://doi.org/10.48550/arXiv.2411.02577 / Published by ArXiv / on (web) Publishing site
Classical Assessment Validation Theory and Responsible AI
The Evolution of Responsible AI for Assessment
Integrating Classical Validation Theory and Responsible AI


Examining Human-AI Collaboration for Co-Writing Constructive Comments Online / 2411.03295 / ISBN:https://doi.org/10.48550/arXiv.2411.03295 / Published by ArXiv / on (web) Publishing site
4 Findings


A Comprehensive Review of Multimodal XR Applications, Risks, and Ethical Challenges in the Metaverse / 2411.04508 / ISBN:https://doi.org/10.48550/arXiv.2411.04508 / Published by ArXiv / on (web) Publishing site
1. Introduction
2. Multimodal Interaction Across the Virtual Continuum
3. XR Applications: Expanding Multimodal Interactions Across Domains
4. Potential Risks and Ethical Challenges of XR and the Metaverse
5. General Discussion


"I Always Felt that Something Was Wrong.": Understanding Compliance Risks and Mitigation Strategies when Professionals Use Large Language Models / 2411.04576 / ISBN:https://doi.org/10.48550/arXiv.2411.04576 / Published by ArXiv / on (web) Publishing site
2 Background and Related Work
4 Findings
5 Discussion


Navigating the Cultural Kaleidoscope: A Hitchhiker's Guide to Sensitivity in Large Language Models / 2410.12880 / ISBN:https://doi.org/10.48550/arXiv.2410.12880 / Published by ArXiv / on (web) Publishing site
Appendices


Improving governance outcomes through AI documentation: Bridging theory and practice / 2409.08960 / ISBN:https://doi.org/10.48550/arXiv.2409.08960 / Published by ArXiv / on (web) Publishing site
6 Directions for future research



How should AI decisions be explained? Requirements for Explanations from the Perspective of European Law / 2404.12762 / ISBN:https://doi.org/10.48550/arXiv.2404.12762 / Published by ArXiv / on (web) Publishing site
4 Legal Requirements: Decision-Centric
7 Summary


A Survey on Medical Large Language Models: Technology, Application, Trustworthiness, and Future Directions / 2406.03712 / ISBN:https://doi.org/10.48550/arXiv.2406.03712 / Published by ArXiv / on (web) Publishing site
III. From General to Medical-Specific LLMs
VI. Trustworthiness and Safety


The doctor will polygraph you now: ethical concerns with AI for fact-checking patients / 2408.07896 / ISBN:https://doi.org/10.48550/arXiv.2408.07896 / Published by ArXiv / on (web) Publishing site
2. Clinical, Technical, and Ethical Concerns


Nteasee: A mixed methods study of expert and general population perspectives on deploying AI for health in African countries / 2409.12197 / ISBN:https://doi.org/10.48550/arXiv.2409.12197 / Published by ArXiv / on (web) Publishing site
3 Methods


Large-scale moral machine experiment on large language models / 2411.06790 / ISBN:https://doi.org/10.48550/arXiv.2411.06790 / Published by ArXiv / on (web) Publishing site
Materials and methods
Supporting information


Persuasion with Large Language Models: a Survey / 2411.06837 / ISBN:https://doi.org/10.48550/arXiv.2411.06837 / Published by ArXiv / on (web) Publishing site
2 Application Domains
4 Experimental Design Patterns
6 Conclusion and Future Directions


Collaborative Participatory Research with LLM Agents in South Asia: An Empirically-Grounded Methodological Initiative and Agenda from Field Evidence in Sri Lanka / 2411.08294 / ISBN:https://doi.org/10.48550/arXiv.2411.08294 / Published by ArXiv / on (web) Publishing site
5 Discussion and Future Agenda


The EU AI Act is a good start but falls short / 2411.08535 / ISBN:https://doi.org/10.48550/arXiv.2411.08535 / Published by ArXiv / on (web) Publishing site
1 Introduction
2 Methodology
3 Results
4 Discussion


Programming with AI: Evaluating ChatGPT, Gemini, AlphaCode, and GitHub Copilot for Programmers / 2411.09224 / ISBN:https://doi.org/10.48550/arXiv.2411.09224 / Published by ArXiv / on (web) Publishing site
1 Introduction
4 Methods
5 Empirical Result
7 Response Accuracy
9 Fairness
12 Conclusion


Generative AI in Multimodal User Interfaces: Trends, Challenges, and Cross-Platform Adaptability / 2411.10234 / ISBN:https://doi.org/10.48550/arXiv.2411.10234 / Published by ArXiv / on (web) Publishing site
I. Introduction
VII. Metrics for Evaluating AI-Driven Multimodal UIs


Bias in Large Language Models: Origin, Evaluation, and Mitigation / 2411.10915 / ISBN:https://doi.org/10.48550/arXiv.2411.10915 / Published by ArXiv / on (web) Publishing site
4. Bias Evaluation
5. Bias Mitigation
7. Conclusion


Framework for developing and evaluating ethical collaboration between expert and machine / 2411.10983 / ISBN:https://doi.org/10.48550/arXiv.2411.10983 / Published by ArXiv / on (web) Publishing site
1. Introduction
2. Method


Chat Bankman-Fried: an Exploration of LLM Alignment in Finance / 2411.11853 / ISBN:https://doi.org/10.48550/arXiv.2411.11853 / Published by ArXiv / on (web) Publishing site
3 Experimental framework
4 Results
Appendices


Privacy-Preserving Video Anomaly Detection: A Survey / 2411.14565 / ISBN:https://doi.org/10.48550/arXiv.2411.14565 / Published by ArXiv / on (web) Publishing site
I. Introduction
VI. Evaluation Benchmarks and Metrics
VII. Discussion


Good intentions, unintended consequences: exploring forecasting harms / 2411.16531 / ISBN:https://doi.org/10.48550/arXiv.2411.16531 / Published by ArXiv / on (web) Publishing site
4 Findings: typology of harm in forecasting
6 A Research agenda


AI-Augmented Ethical Hacking: A Practical Examination of Manual Exploitation and Privilege Escalation in Linux Environments / 2411.17539 / ISBN:https://doi.org/10.48550/arXiv.2411.17539 / Published by ArXiv / on (web) Publishing site
6 Discussion: Benefits, Risks and Limitations


Human-centred test and evaluation of military AI / 2412.01978 / ISBN:https://doi.org/10.48550/arXiv.2412.01978 / Published by ArXiv / on (web) Publishing site
Summary
Full Summary


Artificial Intelligence Policy Framework for Institutions / 2412.02834 / ISBN:https://doi.org/10.48550/arXiv.2412.02834 / Published by ArXiv / on (web) Publishing site
III. Key Considerations for AI Policy
IV. Framework for AI Policy Development


Exploring AI Text Generation, Retrieval-Augmented Generation, and Detection Technologies: a Comprehensive Overview / 2412.03933 / ISBN:https://doi.org/10.48550/arXiv.2412.03933 / Published by ArXiv / on (web) Publishing site
IV. Tools and Methods for RAG


From Principles to Practice: A Deep Dive into AI Ethics and Regulations / 2412.04683 / ISBN:https://doi.org/10.48550/arXiv.2412.04683 / Published by ArXiv / on (web) Publishing site
II AI Practice and Contextual Integrity
III AI Ethics and the notion of AI as uncharted moral territory


Employee Well-being in the Age of AI: Perceptions, Concerns, Behaviors, and Outcomes / 2412.04796 / ISBN:https://doi.org/10.48550/arXiv.2412.04796 / Published by ArXiv / on (web) Publishing site
V. AI-Employee Well-being Interaction Framework


Technology as uncharted territory: Contextual integrity and the notion of AI as new ethical ground / 2412.05130 / ISBN:https://doi.org/10.48550/arXiv.2412.05130 / Published by ArXiv / on (web) Publishing site
II AI Practice and Contextual Integrity
III AI Ethics and the notion of AI as uncharted moral territory


Can OpenAI o1 outperform humans in higher-order cognitive thinking? / 2412.05753 / ISBN:https://doi.org/10.48550/arXiv.2412.05753 / Published by ArXiv / on (web) Publishing site
1 Introduction
2 Methods


Political-LLM: Large Language Models in Political Science / 2412.06864 / ISBN:https://doi.org/10.48550/arXiv.2412.06864 / Published by ArXiv / on (web) Publishing site
1 Introduction
5 Technical Foundations for LLM Applications in Political Science
6 Future Directions & Challenges
7 Conclusion


Trustworthy artificial intelligence in the energy sector: Landscape analysis and evaluation framework / 2412.07782 / ISBN:https://doi.org/10.48550/arXiv.2412.07782 / Published by ArXiv / on (web) Publishing site
III. E-TAI – Methodological Framework for Trustworthy AI in the Energy Domain


Digital Democracy in the Age of Artificial Intelligence / 2412.07791 / ISBN:https://doi.org/10.48550/arXiv.2412.07791 / Published by ArXiv / on (web) Publishing site
4. Representation: Digital and AI Technologies in Modern Electoral Processes


Reviewing Intelligent Cinematography: AI research for camera-based video production / 2405.05039 / ISBN:https://doi.org/10.48550/arXiv.2405.05039 / Published by ArXiv / on (web) Publishing site
2 Technical Background
4 Concluding Remarks
Appendices


Shaping AI's Impact on Billions of Lives / 2412.02730 / ISBN:https://doi.org/10.48550/arXiv.2412.02730 / Published by ArXiv / on (web) Publishing site
Introduction
I. Putting Pragmatic AI in Context
III. Harnessing AI for the Public Good


AI Ethics in Smart Homes: Progress, User Requirements and Challenges / 2412.09813 / ISBN:https://doi.org/10.48550/arXiv.2412.09813 / Published by ArXiv / on (web) Publishing site
5 AI Ethics from Technology's Perspective


Research Integrity and GenAI: A Systematic Analysis of Ethical Challenges Across Research Phases / 2412.10134 / ISBN:https://doi.org/10.48550/arXiv.2412.10134 / Published by ArXiv / on (web) Publishing site
Research Phases and AI Tools


On Large Language Models in Mission-Critical IT Governance: Are We Ready Yet? / 2412.11698 / ISBN:https://doi.org/10.48550/arXiv.2412.11698 / Published by ArXiv / on (web) Publishing site
II. Study Design


Clio: Privacy-Preserving Insights into Real-World AI Use / 2412.13678 / ISBN:https://doi.org/10.48550/arXiv.2412.13678 / Published by ArXiv / on (web) Publishing site
Appendices


Understanding and Evaluating Trust in Generative AI and Large Language Models for Spreadsheets / 2412.14062 / ISBN:https://doi.org/10.48550/arXiv.2412.14062 / Published by ArXiv / on (web) Publishing site
Abstract
2.0 Trust in Automation
3.0 Conclusions and Areas for Future Research


Towards Friendly AI: A Comprehensive Review and New Perspectives on Human-AI Alignment / 2412.15114 / ISBN:https://doi.org/10.48550/arXiv.2412.15114 / Published by ArXiv / on (web) Publishing site
I. Introduction
III. Theoretical Perspectives


Autonomous Vehicle Security: A Deep Dive into Threat Modeling / 2412.15348 / ISBN:https://doi.org/10.48550/arXiv.2412.15348 / Published by ArXiv / on (web) Publishing site
VII. Legal and Ethical Considerations in Autonomous Vehicle Security


Ethics and Technical Aspects of Generative AI Models in Digital Content Creation / 2412.16389 / ISBN:https://doi.org/10.48550/arXiv.2412.16389 / Published by ArXiv / on (web) Publishing site
2 Literature Review
3 Methodology
4 Results
Appendices


Large Language Model Safety: A Holistic Survey / 2412.17686 / ISBN:https://doi.org/10.48550/arXiv.2412.17686 / Published by ArXiv / on (web) Publishing site
3 Value Misalignment
4 Robustness to Attack
6 Autonomous AI Risks
8 Interpretability for LLM Safety
9 Technology Roadmaps / Strategies to LLM Safety in Practice
10 Governance
11 Challenges and Future Directions


Generative AI and LLMs in Industry: A text-mining Analysis and Critical Evaluation of Guidelines and Policy Statements Across Fourteen Industrial Sectors / 2501.00957 / ISBN:https://doi.org/10.48550/arXiv.2501.00957 / Published by ArXiv / on (web) Publishing site
V. Discussion and Synthesis


INFELM: In-depth Fairness Evaluation of Large Text-To-Image Models / 2501.01973 / ISBN:https://doi.org/10.48550/arXiv.2501.01973 / Published by ArXiv / on (web) Publishing site
1 Introduction
2 Related Work
4 Method
5 Experiments & Results
6 Conclusion


Concerns and Values in Human-Robot Interactions: A Focus on Social Robotics / 2501.05628 / ISBN:https://doi.org/10.48550/arXiv.2501.05628 / Published by ArXiv / on (web) Publishing site
4 Phase 2: Focus Groups


Hybrid Approaches for Moral Value Alignment in AI Agents: a Manifesto / 2312.01818 / ISBN:https://doi.org/10.48550/arXiv.2312.01818 / Published by ArXiv / on (web) Publishing site
1. Introduction
4. Evaluating Moral Learning Agents


Addressing Intersectionality, Explainability, and Ethics in AI-Driven Diagnostics: A Rebuttal and Call for Transdisciplinary Action / 2501.08497 / ISBN:https://doi.org/10.48550/arXiv.2501.08497 / Published by ArXiv / on (web) Publishing site
Abstract
1 Introduction
2 The Centrality of Intersectionality in Fairness and Diagnostics
5 The Ethical Role of Engineers in AI Development
6 Recommendations for an Inclusive and Ethical Framework
7 Conclusion


A Blockchain-Enabled Approach to Cross-Border Compliance and Trust / 2501.09182 / ISBN:https://doi.org/10.48550/arXiv.2501.09182 / Published by ArXiv / on (web) Publishing site
IV. Proposed Decentralized AI Governance Framework


Uncovering Bias in Foundation Models: Impact, Testing, Harm, and Mitigation / 2501.10453 / ISBN:https://doi.org/10.48550/arXiv.2501.10453 / Published by ArXiv / on (web) Publishing site
4 Method
Supplementary


Securing the AI Frontier: Urgent Ethical and Regulatory Imperatives for AI-Driven Cybersecurity / 2501.10467 / ISBN:https://doi.org/10.48550/arXiv.2501.10467 / Published by ArXiv / on (web) Publishing site
V. Future Directions and Research Opportunities


Bias in Decision-Making for AI's Ethical Dilemmas: A Comparative Study of ChatGPT and Claude / 2501.10484 / ISBN:https://doi.org/10.48550/arXiv.2501.10484 / Published by ArXiv / on (web) Publishing site
Introduction
Methodology
Results


Harnessing the Potential of Large Language Models in Modern Marketing Management: Applications, Future Directions, and Strategic Recommendations / 2501.10685 / ISBN:https://doi.org/10.48550/arXiv.2501.10685 / Published by ArXiv / on (web) Publishing site
4- Research on Market and Consumer Insights
6- Campaign Optimization and Management
7- Social Media and Community Engagement


Human services organizations and the responsible integration of AI: Considering ethics and contextualizing risk(s) / 2501.11705 / ISBN:https://doi.org/10.48550/arXiv.2501.11705 / Published by ArXiv / on (web) Publishing site
A continuum of low- to high-stakes AI applications


Deploying Privacy Guardrails for LLMs: A Comparative Analysis of Real-World Applications / 2501.12456 / ISBN:https://doi.org/10.48550/arXiv.2501.12456 / Published by ArXiv / on (web) Publishing site
Introduction
Comparison of Deployments and Discussion


Investigation of the Privacy Concerns in AI Systems for Young Digital Citizens: A Comparative Stakeholder Analysis / 2501.13321 / ISBN:https://doi.org/10.48550/arXiv.2501.13321 / Published by ArXiv / on (web) Publishing site
IV. Results


A Critical Field Guide for Working with Machine Learning Datasets / 2501.15491 / ISBN:https://doi.org/10.48550/arXiv.2501.15491 / Published by ArXiv / on (web) Publishing site
Endnotes


Responsible Generative AI Use by Product Managers: Recoupling Ethical Principles and Practices / 2501.16531 / ISBN:https://doi.org/10.48550/arXiv.2501.16531 / Published by ArXiv / on (web) Publishing site
2. Background
5. Findings
6. Discussion


Governing the Agent-to-Agent Economy of Trust via Progressive Decentralization / 2501.16606 / ISBN:https://doi.org/10.48550/arXiv.2501.16606 / Published by ArXiv / on (web) Publishing site
Architecting Trust: The Design and Mechanics of AgentBound Tokens
A Self-Sustaining Trust Economy


The Third Moment of AI Ethics: Developing Relatable and Contextualized Tools / 2501.16954 / ISBN:https://doi.org/10.48550/arXiv.2501.16954 / Published by ArXiv / on (web) Publishing site
Appendices


Examining the Expanding Role of Synthetic Data Throughout the AI Development Pipeline / 2501.18493 / ISBN:https://doi.org/10.48550/arXiv.2501.18493 / Published by ArXiv / on (web) Publishing site
4 Findings
5 Discussion


Towards Safe AI Clinicians: A Comprehensive Study on Large Language Model Jailbreaking in Healthcare / 2501.18632 / ISBN:https://doi.org/10.48550/arXiv.2501.18632 / Published by ArXiv / on (web) Publishing site
Introduction
Jailbreak Evaluation Method
Model Guardrail Enhancement


DebiasPI: Inference-time Debiasing by Prompt Iteration of a Text-to-Image Generative Model / 2501.18642 / ISBN:https://doi.org/10.48550/arXiv.2501.18642 / Published by ArXiv / on (web) Publishing site
3 Method


Meursault as a Data Point / 2502.01364 / ISBN:https://doi.org/10.48550/arXiv.2502.01364 / Published by ArXiv / on (web) Publishing site
Abstract
II. Literature Review
III. Conceptual Framework


FairT2I: Mitigating Social Bias in Text-to-Image Generation via Large Language Model-Assisted Detection and Attribute Rebalancing / 2502.03826 / ISBN:https://doi.org/10.48550/arXiv.2502.03826 / Published by ArXiv / on (web) Publishing site
Abstract
1. Introduction
4. Methodologies
5. Experimental Protocol
6. Results


Open Foundation Models in Healthcare: Challenges, Paradoxes, and Opportunities with GenAI Driven Personalized Prescription / 2502.04356 / ISBN:https://doi.org/10.48550/arXiv.2502.04356 / Published by ArXiv / on (web) Publishing site
III. State-of-the-Art in Open Healthcare LLMs and AIFMs
IV. Leveraging Open LLMs for Prescription: A Case Study


Safety at Scale: A Comprehensive Survey of Large Model Safety / 2502.05206 / ISBN:https://doi.org/10.48550/arXiv.2502.05206 / Published by ArXiv / on (web) Publishing site
2 Vision Foundation Model Safety
6 Diffusion Model Safety
8 Open Challenges


The Odyssey of the Fittest: Can Agents Survive and Still Be Good? / 2502.05442 / ISBN:https://doi.org/10.48550/arXiv.2502.05442 / Published by ArXiv / on (web) Publishing site
Related Work
Results


Position: We Need An Adaptive Interpretation of Helpful, Honest, and Harmless Principles / 2502.06059 / ISBN:https://doi.org/10.48550/arXiv.2502.06059 / Published by ArXiv / on (web) Publishing site
3 Ambiguity and Conflicts in HHH
4 Priority Order


Comprehensive Framework for Evaluating Conversational AI Chatbots / 2502.06105 / ISBN:https://doi.org/10.48550/arXiv.2502.06105 / Published by ArXiv / on (web) Publishing site
I. Introduction
II. Related Work
III. Proposed Framework
IV. Key Metrics that capture the essence of ethical and governance compliance
V. Future Research Directions
VI. Conclusion


Fairness in Multi-Agent AI: A Unified Framework for Ethical and Equitable Autonomous Systems / 2502.07254 / ISBN:https://doi.org/10.48550/arXiv.2502.07254 / Published by ArXiv / on (web) Publishing site
Paper


From large language models to multimodal AI: A scoping review on the potential of generative AI in medicine / 2502.09242 / ISBN:https://doi.org/10.48550/arXiv.2502.09242 / Published by ArXiv / on (web) Publishing site
1 Introduction
2 Methods
5 Multimodal language models in medicine
6 Evaluation metrics for generative AI in medicine
7 Discussion


Relational Norms for Human-AI Cooperation / 2502.12102 / ISBN:https://doi.org/10.48550/arXiv.2502.12102 / Published by ArXiv / on (web) Publishing site
Section 1: The Relational Norms Model
Section 2: Distinctive Characteristics of AI and Implications for Relational Norms


Multi-Agent Risks from Advanced AI / 2502.14143 / ISBN:https://doi.org/10.48550/arXiv.2502.14143 / Published by ArXiv / on (web) Publishing site
2 Failure Modes
3 Risk Factors


On the Trustworthiness of Generative Foundation Models: Guideline, Assessment, and Perspective / 2502.14296 / ISBN:https://doi.org/10.48550/arXiv.2502.14296 / Published by ArXiv / on (web) Publishing site
1 Introduction
2 Background
3 Guidelines of Trustworthy Generative Foundation Models
4 Designing TrustGen, a Dynamic Benchmark Platform for Evaluating the Trustworthiness of GenFMs
5 Benchmarking Text-to-Image Models
6 Benchmarking Large Language Models
7 Benchmarking Vision-Language Models
10 Further Discussion


Surgical Scene Understanding in the Era of Foundation AI Models: A Comprehensive Review / 2502.14886 / ISBN:https://doi.org/10.48550/arXiv.2502.14886 / Published by ArXiv / on (web) Publishing site
I. Introduction
II. Background and Challenges
III. ML/DL Applications in Surgical Tool Recognition
IV. ML/DL Applications in Surgical Workflow Analysis
V. ML/DL Applications in Surgical Training and Simulation


Fair Foundation Models for Medical Image Analysis: Challenges and Perspectives / 2502.16841 / ISBN:https://doi.org/10.48550/arXiv.2502.16841 / Published by ArXiv / on (web) Publishing site
3 Data Documentation
4 Environmental Impact


Comprehensive Analysis of Transparency and Accessibility of ChatGPT, DeepSeek, And other SoTA Large Language Models / 2502.18505 / ISBN:https://doi.org/10.48550/arXiv.2502.18505 / Published by ArXiv / on (web) Publishing site
Abstract
2. Methodology
3. Results
4. Discussion


An LLM-based Delphi Study to Predict GenAI Evolution / 2502.21092 / ISBN:https://doi.org/10.48550/arXiv.2502.21092 / Published by ArXiv / on (web) Publishing site
3 Results


Digital Doppelgangers: Ethical and Societal Implications of Pre-Mortem AI Clones / 2502.21248 / ISBN:https://doi.org/10.48550/arXiv.2502.21248 / Published by ArXiv / on (web) Publishing site
5 Policy Recommendations and Future Research Directions


Evaluating Large Language Models on the Spanish Medical Intern Resident (MIR) Examination 2024/2025: A Comparative Analysis of Clinical Reasoning and Knowledge Application / 2503.00025 / ISBN:https://doi.org/10.48550/arXiv.2503.00025 / Published by ArXiv / on (web) Publishing site
3. Methodology
5. Conclusion


Transforming Cyber Defense: Harnessing Agentic and Frontier AI for Proactive, Ethical Threat Intelligence / 2503.00164 / ISBN:https://doi.org/10.48550/arXiv.2503.00164 / Published by ArXiv / on (web) Publishing site
5 Building an AI Cyber Threat Intelligence (CTI) Program


Vision Language Models in Medicine / 2503.01863 / ISBN:https://doi.org/10.48550/arXiv.2503.01863 / Published by ArXiv / on (web) Publishing site
I. Introduction
III. Core Concepts of Visual Language Modeling
IV. VLM Benchmarking and Evaluations
VI. Opportunities and Future Directions
VII. Conclusion


Twenty Years of Personality Computing: Threats, Challenges and Future Directions / 2503.02082 / ISBN:https://doi.org/10.48550/arXiv.2503.02082 / Published by ArXiv / on (web) Publishing site
2 Background, History and Resources
3 Personality Computing Systems
4 Discussion and Conclusion


Between Innovation and Oversight: A Cross-Regional Study of AI Risk Management Frameworks in the EU, U.S., UK, and China / 2503.05773 / ISBN:https://doi.org/10.48550/arXiv.2503.05773 / Published by ArXiv / on (web) Publishing site
4 Comparative Analysis and Evaluation of Effectiveness
5 Case Studies


Medical Hallucinations in Foundation Models and Their Impact on Healthcare / 2503.05777 / ISBN:https://doi.org/10.48550/arXiv.2503.05777 / Published by ArXiv / on (web) Publishing site
4 Detection and Evaluation of Medical Hallucinations
5 Mitigation Strategies
6 Experiments on Medical Hallucination Benchmark
7 Annotations of Medical Hallucination with Clinical Case Records
Appendices


Decoding the Black Box: Integrating Moral Imagination with Technical AI Governance / 2503.06411 / ISBN:https://doi.org/10.48550/arXiv.2503.06411 / Published by ArXiv / on (web) Publishing site
1 Introduction
4 Philosophical and Ethical Considerations for AI


Generative AI in Transportation Planning: A Survey / 2503.07158 / ISBN:https://doi.org/10.48550/arXiv.2503.07158 / Published by ArXiv / on (web) Publishing site
1 Introduction
2 Background
3 Classical Transportation Planning Functions and Modern Transformations
5 Future Directions & Challenges
6 Conclusion


Mapping out AI Functions in Intelligent Disaster (Mis)Management and AI-Caused Disasters / 2502.16644 / ISBN:https://doi.org/10.48550/arXiv.2502.16644 / Published by ArXiv / on (web) Publishing site
3. Intelligent Disaster Mismanagement (IDMM)


AI Governance InternationaL Evaluation Index (AGILE Index) / 2502.15859 / ISBN:https://doi.org/10.48550/arXiv.2502.15859 / Published by ArXiv / on (web) Publishing site
3. Analysis and Observations


Detecting Dataset Bias in Medical AI: A Generalized and Modality-Agnostic Auditing Framework / 2503.09969 / ISBN:https://doi.org/10.48550/arXiv.2503.09969 / Published by ArXiv / on (web) Publishing site
2 Results
4 Methods


MinorBench: A hand-built benchmark for content-based risks for children / 2503.10242 / ISBN:https://doi.org/10.48550/arXiv.2503.10242 / Published by ArXiv / on (web) Publishing site
5 Methodology


LLMs in Disease Diagnosis: A Comparative Study of DeepSeek-R1 and O3 Mini Across Chronic Health Conditions / 2503.10486 / ISBN:https://doi.org/10.48550/arXiv.2503.10486 / Published by ArXiv / on (web) Publishing site
3 Methodology
4 Results
5 Discussion


DarkBench: Benchmarking Dark Patterns in Large Language Models / 2503.10728 / ISBN:https://doi.org/10.48550/arXiv.2503.10728 / Published by ArXiv / on (web) Publishing site
References
Appendices


Synthetic Data for Robust AI Model Development in Regulated Enterprises / 2503.12353 / ISBN:https://doi.org/10.48550/arXiv.2503.12353 / Published by ArXiv / on (web) Publishing site
Synthetic Data Generation for Enterprise AI Development
Case Studies in Regulated Industries
Challenges and Limitations
Future Directions
Conclusion


Policy Frameworks for Transparent Chain-of-Thought Reasoning in Large Language Models / 2503.14521 / ISBN:https://doi.org/10.48550/arXiv.2503.14521 / Published by ArXiv / on (web) Publishing site
2 Transparency of CoT in Current LLMs
5 Policy Framework: Tiered-Access


Ethical Implications of AI in Data Collection: Balancing Innovation with Privacy / 2503.14539 / ISBN:https://doi.org/10.48550/arXiv.2503.14539 / Published by ArXiv / on (web) Publishing site
Introduction


A Peek Behind the Curtain: Using Step-Around Prompt Engineering to Identify Bias and Misinformation in GenAI Models / 2503.15205 / ISBN:https://doi.org/10.48550/arXiv.2503.15205 / Published by ArXiv / on (web) Publishing site
Step-Around Prompting: A Research Tool and Potential Threat


Advancing Human-Machine Teaming: Concepts, Challenges, and Applications / 2503.16518 / ISBN:https://doi.org/10.48550/arXiv.2503.16518 / Published by ArXiv / on (web) Publishing site
1 Introduction
2 Taxonomies of HMT Systems
3 Empirical Studies to Promote Team Performance
4 Evaluation Methodologies of Human-Machine Teaming Systems (HMTSS)
6 Concepts, Challenges, and Applications of Human-Machine Teaming
7 Conclusions & Future Work


Gender and content bias in Large Language Models: a case study on Google Gemini 2.0 Flash Experimental / 2503.16534 / ISBN:https://doi.org/10.48550/arXiv.2503.16534 / Published by ArXiv / on (web) Publishing site
2 Materials and methods
4 Discussion


Three Kinds of AI Ethics / 2503.18842 / ISBN:https://doi.org/10.48550/arXiv.2503.18842 / Published by ArXiv / on (web) Publishing site
3. Ethics in AI


HH4AI: A methodological Framework for AI Human Rights impact assessment under the EU AI Act / 2503.18994 / ISBN:https://doi.org/10.48550/arXiv.2503.18994 / Published by ArXiv / on (web) Publishing site
2 Legal and Regulatory Background
3 Standards and Guidelines
4 Proposed Methodology for AI Assessment
6 Discussion and Future Work
7 Conclusion


AI Family Integration Index (AFII): Benchmarking a New Global Readiness for AI as Family / 2503.22772 / ISBN:https://doi.org/10.48550/arXiv.2503.22772 / Published by ArXiv / on (web) Publishing site
Abstract
1. Introduction
2. Research Methodology
3. Literature Review and Theoretical Framework
5. Insights and Interpretation
6. Discussions


A Systematic Decade Review of Trip Route Planning with Travel Time Estimation based on User Preferences and Behavior / 2503.23486 / ISBN:https://doi.org/10.48550/arXiv.2503.23486 / Published by ArXiv / on (web) Publishing site
III. Recent Works
IV. Limitations and Future Trends
V. Conclusion


BEATS: Bias Evaluation and Assessment Test Suite for Large Language Models / 2503.24310 / ISBN:https://doi.org/10.48550/arXiv.2503.24310 / Published by ArXiv / on (web) Publishing site
Abstract
2 Proposed Framework - BEATS
3 Key Findings
5 Conclusion
7 Appendix


Leveraging LLMs for User Stories in AI Systems: UStAI Dataset / 2504.00513 / ISBN:https://doi.org/10.48550/arXiv.2504.00513 / Published by ArXiv / on (web) Publishing site
5 Discussion


Towards Adaptive AI Governance: Comparative Insights from the U.S., EU, and Asia / 2504.00652 / ISBN:https://doi.org/10.48550/arXiv.2504.00652 / Published by ArXiv / on (web) Publishing site
II. Background and Related Work
III. Methodology
IV. Comparative Analysis
VI. Towards Adaptive AI Governance Frameworks
VII. Future Research Priorities


Bridging the Gap: Integrating Ethics and Environmental Sustainability in AI Research and Practice / 2504.00797 / ISBN:https://doi.org/10.48550/arXiv.2504.00797 / Published by ArXiv / on (web) Publishing site
3 Existing Scholarship in AI Ethics and Sustainability
4 Transversal Issues in AI Ethics and Sustainability
5 Establishing Best Practices for AI Ethics and Sustainability


Who is Responsible When AI Fails? Mapping Causes, Entities, and Consequences of AI Privacy and Ethical Incidents / 2504.01029 / ISBN:https://doi.org/10.48550/arXiv.2504.01029 / Published by ArXiv / on (web) Publishing site
4. Taxonomy of AI Privacy and Ethical Incidents
Appendices


Ethical AI on the Waitlist: Group Fairness Evaluation of LLM-Aided Organ Allocation / 2504.03716 / ISBN:https://doi.org/10.48550/arXiv.2504.03716 / Published by ArXiv / on (web) Publishing site
Abstract
1 Introduction
2 Methods
3 Results
4 Conclusion
Appendix


Language-Dependent Political Bias in AI: A Study of ChatGPT and Gemini / 2504.06436 / ISBN:https://doi.org/10.48550/arXiv.2504.06436 / Published by ArXiv / on (web) Publishing site
4. Results


Automating the Path: An R&D Agenda for Human-Centered AI and Visualization / 2504.07529 / ISBN:https://doi.org/10.48550/arXiv.2504.07529 / Published by ArXiv / on (web) Publishing site
Schematize
Report


Examining Human-AI Collaboration for Co-Writing Constructive Comments Online / 2411.03295 / ISBN:https://doi.org/10.48550/arXiv.2411.03295 / Published by ArXiv / on (web) Publishing site
A Appendix


Towards interactive evaluations for interaction harms in human-AI systems / 2405.10632 / ISBN:https://doi.org/10.48550/arXiv.2405.10632 / Published by ArXiv / on (web) Publishing site
Abstract
3 Why current evaluation approaches are insufficient for assessing interaction harms
4 Towards better evaluations of interaction harms


AI-Driven Healthcare: A Review on Ensuring Fairness and Mitigating Bias / 2407.19655 / ISBN:https://doi.org/10.48550/arXiv.2407.19655 / Published by ArXiv / on (web) Publishing site
Introduction
2 Fairness Concerns in Healthcare
3 Addressing and Mitigating Unfairness in AI


A Comprehensive Survey on Integrating Large Language Models with Knowledge-Based Methods / 2501.13947 / ISBN:https://doi.org/10.48550/arXiv.2501.13947 / Published by ArXiv / on (web) Publishing site
1. Introduction
4. Solutions to address LLM challenges
5. Integrating LLMs with knowledge bases
6. Conclusion


Transforming Science with Large Language Models: A Survey on AI-assisted Scientific Discovery, Experimentation, Content Generation, and Evaluation / 2502.05151 / ISBN:https://doi.org/10.48550/arXiv.2502.05151 / Published by ArXiv / on (web) Publishing site
2 Survey Methodology
3 AI Support for Individual Topics and Tasks
5 Conclusion
Appendix


Who is Responsible? The Data, Models, Users or Regulations? A Comprehensive Survey on Responsible Generative AI for a Sustainable Future / 2502.08650 / ISBN:https://doi.org/10.48550/arXiv.2502.08650 / Published by ArXiv / on (web) Publishing site
3 Explainable AI
4 Best Practices for Responsible Generative AI and Existing Frameworks
5 Responsible AI Applications Across Domains
6 Discussion
7 Conclusion


>Publishing site
Executive Summary
Introduction to GenAI Evaluation
What is Being Evaluated?
Who Evaluates and How?
Other Considerations
How Long is the Evaluation Relevant?
Case Studies
Summary of Recommendations


Confirmation Bias in Generative AI Chatbots: Mechanisms, Risks, Mitigation Strategies, and Future Research Directions / 2504.09343 / ISBN:https://doi.org/10.48550/arXiv.2504.09343 / Published by ArXiv / on (web) Publishing site
7. Future Research Directions


Designing AI-Enabled Countermeasures to Cognitive Warfare / 2504.11486 / ISBN:https://doi.org/10.48550/arXiv.2504.11486 / Published by ArXiv / on (web) Publishing site
2.0 Cognitive Warfare in Practice
3.0 AI-Enabled Cognitive Warfare
4.0 Functional Analysis


Framework, Standards, Applications and Best practices of Responsible AI: A Comprehensive Survey / 2504.13979 / ISBN:https://doi.org/10.48550/arXiv.2504.13979 / Published by ArXiv / on (web) Publishing site
1. Introduction
5. AI Standards and Regulations
7. RAI in Technology
8. Ongoing research and industry projects
9. Challenges and Best practices of RAI


Values in the Wild: Discovering and Analyzing Values in Real-World Language Model Interactions / 2504.15236 / ISBN:https://doi.org/10.48550/arXiv.2504.15236 / Published by ArXiv / on (web) Publishing site
4 Related Work


Enhancing Trust Through Standards: A Comparative Risk-Impact Framework for Aligning ISO AI Standards with Global Ethical and Regulatory Contexts / 2504.16139 / ISBN:https://doi.org/10.48550/arXiv.2504.16139 / Published by ArXiv / on (web) Publishing site
V. Mapping of ISO Standards


Towards responsible AI for education: Hybrid human-AI to confront the Elephant in the room / 2504.16148 / ISBN:https://doi.org/10.48550/arXiv.2504.16148 / Published by ArXiv / on (web) Publishing site
3. Challenges of current AI methods in education: The Elephant in the room


Approaches to Responsible Governance of GenAI in Organizations / 2504.17044 / ISBN:https://doi.org/10.48550/arXiv.2504.17044 / Published by ArXiv / on (web) Publishing site
IV. Solutions to Address Concerns
V. Implementation Plan: Toward Actionable GenAI Governance


Auditing the Ethical Logic of Generative AI Models / 2504.17544 / ISBN:https://doi.org/10.48550/arXiv.2504.17544 / Published by ArXiv / on (web) Publishing site
Abstract
Findings
Auditing the Reasoning Models


AI Ethics and Social Norms: Exploring ChatGPT's Capabilities From What to How / 2504.18044 / ISBN:https://doi.org/10.48550/arXiv.2504.18044 / Published by ArXiv / on (web) Publishing site
4 Result
5 Discussion


A Design Framework for operationalizing Trustworthy Artificial Intelligence in Healthcare: Requirements, Tradeoffs and Challenges for its Clinical Adoption / 2504.19179 / ISBN:https://doi.org/10.48550/arXiv.2504.19179 / Published by ArXiv / on (web) Publishing site
1. Introduction
3. An overview of the AI ecosystem in the medical field: processes, data, and stakeholders
4. Design framework for medical AI systems
6. Challenges towards the practical adoption of the design framework in healthcare
7. Conclusions and outlook


The Convergent Ethics of AI? Analyzing Moral Foundation Priorities in Large Language Models with a Multi-Framework Approach / 2504.19255 / ISBN:https://doi.org/10.48550/arXiv.2504.19255 / Published by ArXiv / on (web) Publishing site
Abstract
Methodology
Findings


Balancing Creativity and Automation: The Influence of AI on Modern Film Production and Dissemination / 2504.19275 / ISBN:https://doi.org/10.48550/arXiv.2504.19275 / Published by ArXiv / on (web) Publishing site
4. Case Study Analysis
5. Discussion & Conclusion


Towards responsible AI for education: Hybrid human-AI to confront the Elephant in the room / 2504.16148 / ISBN:https://doi.org/10.48550/arXiv.2504.16148 / Published by ArXiv / on (web) Publishing site
3 Evaluating AI Awareness in LLMs


AI Awareness / 2504.20084 / ISBN:https://doi.org/10.48550/arXiv.2504.20084 / Published by ArXiv / on (web) Publishing site
3 Evaluating AI Awareness in LLMs


TF1-EN-3M: Three Million Synthetic Moral Fables for Training Small, Open Language Models / 2504.20605 / ISBN:https://doi.org/10.48550/arXiv.2504.20605 / Published by ArXiv / on (web) Publishing site
Abstract
1 Introduction
2 Prompt design and dataset generation
3 LLM Evaluation and Comparison with Related Work
5 Discussion and threats to validity


Federated learning, ethics, and the double black box problem in medical AI / 2504.20656 / ISBN:https://doi.org/10.48550/arXiv.2504.20656 / Published by ArXiv / on (web) Publishing site
2 What is federated learning?


Generative AI in Financial Institution: A Global Survey of Opportunities, Threats, and Regulation / 2504.21574 / ISBN:https://doi.org/10.48550/arXiv.2504.21574 / Published by ArXiv / on (web) Publishing site
7. Recommendations for Secure AI Adoption


From Texts to Shields: Convergence of Large Language Models and Cybersecurity / 2505.00841 / ISBN:https://doi.org/10.48550/arXiv.2505.00841 / Published by ArXiv / on (web) Publishing site
3 LLM Agent and Applications
4 Socio-Technical Aspects of LLM and Security


LLM Ethics Benchmark: A Three-Dimensional Assessment System for Evaluating Moral Reasoning in Large Language Models / 2505.00853 / ISBN:https://doi.org/10.48550/arXiv.2505.00853 / Published by ArXiv / on (web) Publishing site
1 Introduction
2 Related Work
3 Customizing Moral Evaluation for LLMs
4 Proposed Methodology for Testing LLM Moral Reasoning
5 Experimental Results
6 Concluding Remarks and Future Directions


Securing the Future of IVR: AI-Driven Innovation with Agile Security, Data Regulation, and Ethical AI Integration / 2505.01514 / ISBN:https://doi.org/10.48550/arXiv.2505.01514 / Published by ArXiv / on (web) Publishing site
IV. The Role of AI in Modern IVR Systems


Emotions in the Loop: A Survey of Affective Computing for Emotional Support / 2505.01542 / ISBN:https://doi.org/10.48550/arXiv.2505.01542 / Published by ArXiv / on (web) Publishing site
III. Survey Methodology
IV. Applications and Approaches
V. Technological Strengths
VI. Datasets for Emotion Management and Sentiment Analysis
VII. Evaluation Methodologies
VIII. Challenges and Weaknesses


Towards Safer Pretraining: Analyzing and Filtering Harmful Content in Webscale datasets for Responsible LLMs / 2505.02009 / ISBN:https://doi.org/10.48550/arXiv.2505.02009 / Published by ArXiv / on (web) Publishing site
3 Three-Dimensional Safety Taxonomy for LLM Risk Mitigation


GenAI in Entrepreneurship: a systematic review of generative artificial intelligence in entrepreneurship research: current issues and future directions / 2505.05523 / ISBN:https://doi.org/10.48550/arXiv.2505.05523 / Published by ArXiv / on (web) Publishing site
3. Methodology
4. Findings
5. Ethics, Opportunities and Future Directions


Examining the Expanding Role of Synthetic Data Throughout the AI Development Pipeline / 2501.18493 / ISBN:https://doi.org/10.48550/arXiv.2501.18493 / Published by ArXiv / on (web) Publishing site
Appendices


AI and Generative AI Transforming Disaster Management: A Survey of Damage Assessment and Response Techniques / 2505.08202 / ISBN:https://doi.org/10.48550/arXiv.2505.08202 / Published by ArXiv / on (web) Publishing site
II Domain-Specific Literature Review


Ethics and Persuasion in Reinforcement Learning from Human Feedback: A Procedural Rhetorical Approach / 2505.09576 / ISBN:https://doi.org/10.48550/arXiv.2505.09576 / Published by ArXiv / on (web) Publishing site
II Background


WorldView-Bench: A Benchmark for Evaluating Global Cultural Perspectives in Large Language Models / 2505.09595 / ISBN:https://doi.org/10.48550/arXiv.2505.09595 / Published by ArXiv / on (web) Publishing site
1 Introduction
3 Methodology and System Design
6 Discussion and Potential Limitations
Appendices


Analysing Safety Risks in LLMs Fine-Tuned with Pseudo-Malicious Cyber Security Data / 2505.09974 / ISBN:https://doi.org/10.48550/arXiv.2505.09974 / Published by ArXiv / on (web) Publishing site
2 Related Work


AI LEGO: Scaffolding Cross-Functional Collaboration in Industrial Responsible AI Practices during Early Design Stages / 2505.10300 / ISBN:https://doi.org/10.48550/arXiv.2505.10300 / Published by ArXiv / on (web) Publishing site
2 Related Work
4 AI LEGO
5 Evaluation User Study
Appendices


Formalising Human-in-the-Loop: Computational Reductions, Failure Modes, and Legal-Moral Responsibility / 2505.10426 / ISBN:https://doi.org/10.48550/arXiv.2505.10426 / Published by ArXiv / on (web) Publishing site
Legal-Moral Responsibility


Sentience Quest: Towards Embodied, Emotionally Adaptive, Self-Evolving, Ethically Aligned Artificial General Intelligence / 2505.12229 / ISBN:https://doi.org/10.48550/arXiv.2505.12229 / Published by ArXiv / on (web) Publishing site
3 Our Proposed Model for Sentient AI
4 Preliminary Results and Evaluation
5 Future Work and Call to Action


Beyond Individual UX: Defining Group Experience(GX) as a New Paradigm for Group-centered AI / 2505.12780 / ISBN:https://doi.org/10.48550/arXiv.2505.12780 / Published by ArXiv / on (web) Publishing site
Abstract
1 Introduction
2 User Experience(UX)_ Group Experience(GX)!
3 Beyond Individuals: A New AI Paradigm for Groups
4 Conclusion


Aligning Trustworthy AI with Democracy: A Dual Taxonomy of Opportunities and Risks / 2505.13565 / ISBN:https://doi.org/10.48550/arXiv.2505.13565 / Published by ArXiv / on (web) Publishing site
3 Positive taxonomy: Positive impact of AI on democracy
4 Risk taxonomy: risks posed by AI to democracy


Exploring Moral Exercises for Human Oversight of AI systems: Insights from Three Pilot Studies / 2505.15851 / ISBN:https://doi.org/10.48550/arXiv.2505.15851 / Published by ArXiv / on (web) Publishing site
4 Discussion


Advancing the Scientific Method with Large Language Models: From Hypothesis to Discovery / 2505.16477 / ISBN:https://doi.org/10.48550/arXiv.2505.16477 / Published by ArXiv / on (web) Publishing site
Abstract
Introduction


Cultural Value Alignment in Large Language Models: A Prompt-based Analysis of Schwartz Values in Gemini, ChatGPT, and DeepSeek / 2505.17112 / ISBN:https://doi.org/10.48550/arXiv.2505.17112 / Published by ArXiv / on (web) Publishing site
Methods


A Toolkit for Compliance, a Toolkit for Justice: Drawing on Cross-sectoral Expertise to Develop a Pro-justice EU AI Act Toolkit / 2505.17165 / ISBN:https://doi.org/10.48550/arXiv.2505.17165 / Published by ArXiv / on (web) Publishing site
4 Toolkit Design


SweEval: Do LLMs Really Swear? A Safety Benchmark for Testing Limits for Enterprise Use / 2505.17332 / ISBN:https://doi.org/10.48550/arXiv.2505.17332 / Published by ArXiv / on (web) Publishing site
4 Experiments


TEDI: Trustworthy and Ethical Dataset Indicators to Analyze and Compare Dataset Documentation / 2505.17841 / ISBN:https://doi.org/10.48550/arXiv.2505.17841 / Published by ArXiv / on (web) Publishing site
3 Analysis of Multimodal Datasets


Just as Humans Need Vaccines, So Do Models: Model Immunization to Combat Falsehoods / 2505.17870 / ISBN:https://doi.org/10.48550/arXiv.2505.17870 / Published by ArXiv / on (web) Publishing site
2 Conceptual Framework: Model Immunization via Quarantined Falsehoods


AI Literacy for Legal AI Systems: A practical approach / 2505.18006 / ISBN:https://doi.org/10.48550/arXiv.2505.18006 / Published by ArXiv / on (web) Publishing site
5. Legal AI Systems Risk Assessment
Annex


Making Sense of the Unsensible: Reflection, Survey, and Challenges for XAI in Large Language Models Toward Human-Centered AI / 2505.20305 / ISBN:https://doi.org/10.48550/arXiv.2505.20305 / Published by ArXiv / on (web) Publishing site
2 Beyond Transparency: Why Explainability Is Essential for LLMs?
3 What Is XAI in the Context of LLMs?
4 How Can We Measure XAI in LLMs?
5 Audience-Centered XAI role in LLMs
6 Mechanistic Interpretability: From Circuits to Cognitive Tracing in LLMs
7 Designing Actionable and Governable XAI: Challenges and Research Frontiers


Can we Debias Social Stereotypes in AI-Generated Images? Examining Text-to-Image Outputs and User Perceptions / 2505.20692 / ISBN:https://doi.org/10.48550/arXiv.2505.20692 / Published by ArXiv / on (web) Publishing site
1 Introduction
2 Background and Related Work
3 Study Design and Data
4 Methods
6 Discussion


Simulating Ethics: Using LLM Debate Panels to Model Deliberation on Medical Dilemmas / 2505.21112 / ISBN:https://doi.org/10.48550/arXiv.2505.21112 / Published by ArXiv / on (web) Publishing site
2 Related Work
6. Future Directions


Are Language Models Consequentialist or Deontological Moral Reasoners? / 2505.21479 / ISBN:https://doi.org/10.48550/arXiv.2505.21479 / Published by ArXiv / on (web) Publishing site
4 Methodology


Toward a Cultural Co-Genesis of AI Ethics / 2505.21542 / ISBN:https://doi.org/10.48550/arXiv.2505.21542 / Published by ArXiv / on (web) Publishing site
Theoretical Case – Cross-Cultural Narratives on the Ethics of the “Other”


Human-Centered Human-AI Collaboration (HCHAC) / 2505.22477 / ISBN:https://doi.org/10.48550/arXiv.2505.22477 / Published by ArXiv / on (web) Publishing site
3. Human Factor Research Methods of HAC
5. Human-Centered Human-AI Collaboration
7. Conclusion


Toward Effective AI Governance: A Review of Principles / 2505.23417 / ISBN:https://doi.org/10.48550/arXiv.2505.23417 / Published by ArXiv / on (web) Publishing site
III Results
IV Discussion


SafeScientist: Toward Risk-Aware Scientific Discoveries by LLM Agents / 2505.23559 / ISBN:https://doi.org/10.48550/arXiv.2505.23559 / Published by ArXiv / on (web) Publishing site
1 Introduction
5 Experiment
Appendix


From Connectivity to Autonomy: The Dawn of Self-Evolving Communication Systems / 2505.23710 / ISBN:https://doi.org/10.48550/arXiv.2505.23710 / Published by ArXiv / on (web) Publishing site
Introduction


Exploring Societal Concerns and Perceptions of AI: A Thematic Analysis through the Lens of Problem-Seeking / 2505.23930 / ISBN:https://doi.org/10.48550/arXiv.2505.23930 / Published by ArXiv / on (web) Publishing site
Abstract
Methods
Results
Discussion


Locating Risk: Task Designers and the Challenge of Risk Disclosure in RAI Content Work / 2505.24246 / ISBN:https://doi.org/10.48550/arXiv.2505.24246 / Published by ArXiv / on (web) Publishing site
2 Related Work


Bottom-Up Perspectives on AI Governance: Insights from User Reviews of AI Products / 2506.00080 / ISBN:https://doi.org/10.48550/arXiv.2506.00080 / Published by ArXiv / on (web) Publishing site
4. Results
Appendix


Wide Reflective Equilibrium in LLM Alignment: Bridging Moral Epistemology and AI Safety / 2506.00415 / ISBN:https://doi.org/10.48550/arXiv.2506.00415 / Published by ArXiv / on (web) Publishing site
6. Normativity and the Limits of the Analogy
7. Operationalizing MWRE for LLM Alignment: Pathways, Pitfalls, and Technical Mechanisms
8. Future Research Directions and Broader Implications
9. Conclusion: Towards More Justified and Coherent AI Alignment


DeepSeek in Healthcare: A Survey of Capabilities, Risks, and Clinical Applications of Open-Source Large Language Models / 2506.01257 / ISBN:https://doi.org/10.48550/arXiv.2506.01257 / Published by ArXiv / on (web) Publishing site
Clinical Applications
Discussion
Appendix


Machine vs Machine: Using AI to Tackle Generative AI Threats in Assessment / 2506.02046 / ISBN:https://doi.org/10.48550/arXiv.2506.02046 / Published by ArXiv / on (web) Publishing site
4. Theoretical Framework for Vulnerability Scoring


Mitigating Manipulation and Enhancing Persuasion: A Reflective Multi-Agent Approach for Legal Argument Generation / 2506.02992 / ISBN:https://doi.org/10.48550/arXiv.2506.02992 / Published by ArXiv / on (web) Publishing site
2 Related Work
4 Experimental Design
5 Results and Analysis
7 Limitations and Future Work


Explainability in Context: A Multilevel Framework Aligning AI Explanations with Stakeholder with LLMs / 2506.05887 / ISBN:https://doi.org/10.48550/arXiv.2506.05887 / Published by ArXiv / on (web) Publishing site
5 Conclusions


Whole-Person Education for AI Engineers / 2506.09185 / ISBN:https://doi.org/10.48550/arXiv.2506.09185 / Published by ArXiv / on (web) Publishing site
II Literature Review
VII Appendix: Extracts from Participant Reflections