RobertoLofaro.com - Knowledge Portal - human-generated content
Change, with and without technology



AI Ethics Primer - search within the bibliography - version 0.4 of 2023-12-13 > (tag cloud) > tag selected: llms



Tag: llms

Bibliography items where this tag occurs: 314
From Military to Healthcare: Adopting and Expanding Ethical Principles for Generative Artificial Intelligence / 2308.02448 / ISBN:https://doi.org/10.48550/arXiv.2308.02448 / Published by ArXiv / on (web) Publishing site
What is Generative Artificial Intelligence?


Ethical Considerations and Policy Implications for Large Language Models: Guiding Responsible Development and Deployment / 2308.02678 / ISBN:https://doi.org/10.48550/arXiv.2308.02678 / Published by ArXiv / on (web) Publishing site
Abstract
Introduction
System-role
Perturbation
Image-related
Hallucination
Generation-related
Conclusion


Dual Governance: The intersection of centralized regulation and crowdsourced safety mechanisms for Generative AI / 2308.04448 / ISBN:https://doi.org/10.48550/arXiv.2308.04448 / Published by ArXiv / on (web) Publishing site
5 Crowdsourced safety mechanism


Bad, mad, and cooked: Moral responsibility for civilian harms in human-AI military teams / 2211.06326 / ISBN:https://doi.org/10.48550/arXiv.2211.06326 / Published by ArXiv / on (web) Publishing site
Abstract
Computers, Autonomy and Accountability


A Survey of Safety and Trustworthiness of Large Language Models through the Lens of Verification and Validation / 2305.11391 / ISBN:https://doi.org/10.48550/arXiv.2305.11391 / Published by ArXiv / on (web) Publishing site
Abstract
1 Introduction
2 Large Language Models
3 Vulnerabilities, Attack, and Limitations
4 General Verification Framework
5 Falsification and Evaluation
6 Verification
7 Runtime Monitor
8 Regulations and Ethical Use
9 Discussions
10 Conclusions
References


Getting pwn'd by AI: Penetration Testing with Large Language Models / 2308.00121 / ISBN:https://doi.org/10.48550/arXiv.2308.00121 / Published by ArXiv / on (web) Publishing site
1 Introduction
2 Background
4 Discussion
5 A vision of AI-augmented pen-testing
6 Final ethical considerations
References


Building Trust in Conversational AI: A Comprehensive Review and Solution Architecture for Explainable, Privacy-Aware Systems using LLMs and Knowledge Graph / 2308.13534 / ISBN:https://doi.org/10.48550/arXiv.2308.13534 / Published by ArXiv / on (web) Publishing site
Abstract
I. Introduction
II. Methods and training process of LLMs
III. Comprehensive review of state-of-the-art LLMs
IV. Applied and technology implications for LLMs
V. Market analysis of LLMs and cross-industry use cases
VI. Solution architecture for privacy-aware and trustworthy conversational AI
VII. Discussions
VIII. Conclusion
Appendix A industry-wide LLM usecases


The Promise and Peril of Artificial Intelligence -- Violet Teaming Offers a Balanced Path Forward / 2308.14253 / ISBN:https://doi.org/10.48550/arXiv.2308.14253 / Published by ArXiv / on (web) Publishing site
4 Integrating red teaming, blue teaming, and ethics with violet teaming


Rethinking Machine Ethics -- Can LLMs Perform Moral Reasoning through the Lens of Moral Theories? / 2308.15399 / ISBN:https://doi.org/10.48550/arXiv.2308.15399 / Published by ArXiv / on (web) Publishing site
2 Related Works
3 Theory and Method
4 Experiment
Ethical Impact


The AI Revolution: Opportunities and Challenges for the Finance Sector / 2308.16538 / ISBN:https://doi.org/10.48550/arXiv.2308.16538 / Published by ArXiv / on (web) Publishing site
6 Regulation of AI and regulating through AI


The Impact of Artificial Intelligence on the Evolution of Digital Education: A Comparative Study of OpenAI Text Generation Tools including ChatGPT, Bing Chat, Bard, and Ernie / 2309.02029 / ISBN:https://doi.org/10.48550/arXiv.2309.02029 / Published by ArXiv / on (web) Publishing site
4. Methods
References


Pathway to Future Symbiotic Creativity / 2209.02388 / ISBN:https://doi.org/10.48550/arXiv.2209.02388 / Published by ArXiv / on (web) Publishing site
Part 3 - 2 Machine Artist Models
Part 3 - 3 Comparison with Generative Models
Part 3 - 4 Demonstration of the Proposed Framework


EALM: Introducing Multidimensional Ethical Alignment in Conversational Information Retrieval / 2310.00970 / ISBN:https://doi.org/10.48550/arXiv.2310.00970 / Published by ArXiv / on (web) Publishing site
1 Introduction


ClausewitzGPT Framework: A New Frontier in Theoretical Large Language Model Enhanced Information Operations / 2310.07099 / ISBN:https://doi.org/10.48550/arXiv.2310.07099 / Published by ArXiv / on (web) Publishing site
Abstract
Introduction
Theoretical Impact of LLMs on Information Operations
ClausewitzGPT and Modern Strategy
Mathematical Foundations
Ethical and Strategic Considerations: AI Mediators in the Age of LLMs
Integrating Computational Social Science, Computational Ethics, Systems Engineering, and AI Ethics in LLM-driven Operations
Looking Forward: ClausewitzGPT
Conclusion


A Survey of Large Language Models for Healthcare: from Data, Technology, and Applications to Accountability and Ethics / 2310.05694 / ISBN:https://doi.org/10.48550/arXiv.2310.05694 / Published by ArXiv / on (web) Publishing site
Abstract
1. Introduction
2. What LLMs can do for healthcare? from fundamental tasks to advanced applications
3. From PLMs to LLMs for healthcare
4. Usage and data for healthcare LLM
5. Improving fairness, accountability, transparency, and ethics
6. Discussion
7. Conclusion
References


STREAM: Social data and knowledge collective intelligence platform for TRaining Ethical AI Models / 2310.05563 / ISBN:https://doi.org/10.48550/arXiv.2310.05563 / Published by ArXiv / on (web) Publishing site
Abstract
1 Introduction
2 STREAM: Social data and knowledge collective intelligence platform for TRaining Ethical AI Models
3 The applications of STREAM
4 Conclusion and Future Work
Author contributions statement


Regulation and NLP (RegNLP): Taming Large Language Models / 2310.05553 / ISBN:https://doi.org/10.48550/arXiv.2310.05553 / Published by ArXiv / on (web) Publishing site
Abstract
1 Introduction
2 Regulation: A Short Introduction
3 LLMs: Risk and Uncertainty
4 Scientific Expertise, Social Media and Regulatory Capture
5 Regulation and NLP (RegNLP): A New Field
6 Conclusion


Commercialized Generative AI: A Critical Study of the Feasibility and Ethics of Generating Native Advertising Using Large Language Models in Conversational Web Search / 2310.04892 / ISBN:https://doi.org/10.48550/arXiv.2310.04892 / Published by ArXiv / on (web) Publishing site
Introduction
Conclusion


The Return on Investment in AI Ethics: A Holistic Framework / 2309.13057 / ISBN:https://doi.org/10.48550/arXiv.2309.13057 / Published by ArXiv / on (web) Publishing site
4. A Holistic Framework


An Evaluation of GPT-4 on the ETHICS Dataset / 2309.10492 / ISBN:https://doi.org/10.48550/arXiv.2309.10492 / Published by ArXiv / on (web) Publishing site
4 Discussion
References


AI & Blockchain as sustainable teaching and learning tools to cope with the 4IR / 2305.01088 / ISBN:https://doi.org/10.48550/arXiv.2305.01088 / Published by ArXiv / on (web) Publishing site
11.References


Toward an Ethics of AI Belief / 2304.14577 / ISBN:https://doi.org/10.48550/arXiv.2304.14577 / Published by ArXiv / on (web) Publishing site
4. Nascent Extant Work that Falls Within the Ethics of AI Belief


Language Agents for Detecting Implicit Stereotypes in Text-to-Image Models at Scale / 2310.11778 / ISBN:https://doi.org/10.48550/arXiv.2310.11778 / Published by ArXiv / on (web) Publishing site
1 Introduction
2 Agent Design
4 Agent Performance
5 Related Work


Systematic AI Approach for AGI: Addressing Alignment, Energy, and AGI Grand Challenges / 2310.15274 / ISBN:https://doi.org/10.48550/arXiv.2310.15274 / Published by ArXiv / on (web) Publishing site
1 Introduction
2 Trifecta of AI Challenges


AI Alignment and Social Choice: Fundamental Limitations and Policy Implications / 2310.16048 / ISBN:https://doi.org/10.48550/arXiv.2310.16048 / Published by ArXiv / on (web) Publishing site
Abstract
1 Introduction
5 Conclusion


Unpacking the Ethical Value Alignment in Big Models / 2310.17551 / ISBN:https://doi.org/10.48550/arXiv.2310.17551 / Published by ArXiv / on (web) Publishing site
Abstract
1 Introduction
2 Risks and Ethical Issues of Big Model
3 Investigating the Ethical Values of Large Language Models
4 Equilibrium Alignment: A Prospective Paradigm for Ethical Value Alignment


Human participants in AI research: Ethics and transparency in practice / 2311.01254 / ISBN:https://doi.org/10.48550/arXiv.2311.01254 / Published by ArXiv / on (web) Publishing site
Appendix A Evaluating Current Practices for Human-Participants Research


LLMs grasp morality in concept / 2311.02294 / ISBN:https://doi.org/10.48550/arXiv.2311.02294 / Published by ArXiv / on (web) Publishing site
Abstract
1 Introduction
2 A General Theory of Meaning
3 The Meaning Model
4 The Moral Model
5 Conclusion
A Supplementary Material


Unlocking the Potential of ChatGPT: A Comprehensive Exploration of its Applications, Advantages, Limitations, and Future Directions in Natural Language Processing / 2304.02017 / ISBN:https://doi.org/10.48550/arXiv.2304.02017 / Published by ArXiv / on (web) Publishing site
Abstract
2 Overview of ChatGPT and its capabilities
References


Synergizing Human-AI Agency: A Guide of 23 Heuristics for Service Co-Creation with LLM-Based Agents / 2310.15065 / ISBN:https://doi.org/10.48550/arXiv.2310.15065 / Published by ArXiv / on (web) Publishing site
Abstract
1 Introduction
2 Related work
3 Method
4 Findings
5 Discussion
References


She had Cobalt Blue Eyes: Prompt Testing to Create Aligned and Sustainable Language Models / 2310.18333 / ISBN:https://doi.org/10.48550/arXiv.2310.18333 / Published by ArXiv / on (web) Publishing site
Abstract
1 Introduction
2 Related Works
3 ReFLeCT: Robust, Fair, and Safe LLM Construction Test Suite
4 Empirical Evaluation and Outcomes
5 Conclusion


How Trustworthy are Open-Source LLMs? An Assessment under Malicious Demonstrations Shows their Vulnerabilities / 2311.09447 / ISBN:https://doi.org/10.48550/arXiv.2311.09447 / Published by ArXiv / on (web) Publishing site
Abstract
1 Introduction
2 Related Work
3 Methodology
4 Experiments
5 Conclusion
Ethical Considerations
References


Prudent Silence or Foolish Babble? Examining Large Language Models' Responses to the Unknown / 2311.09731 / ISBN:https://doi.org/10.48550/arXiv.2311.09731 / Published by ArXiv / on (web) Publishing site
Abstract
1 Introduction
2 UnknownBench: Evaluating LLMs on the Unknown
3 Experiments
4 Related Work
5 Conclusion
References
B Confidence Elicitation Method Comparison
D Additional Results and Figures
E Prompt Templates


Revolutionizing Customer Interactions: Insights and Challenges in Deploying ChatGPT and Generative Chatbots for FAQs / 2311.09976 / ISBN:https://doi.org/10.48550/arXiv.2311.09976 / Published by ArXiv / on (web) Publishing site
1. Introduction


Case Repositories: Towards Case-Based Reasoning for AI Alignment / 2311.10934 / ISBN:https://doi.org/10.48550/arXiv.2311.10934 / Published by ArXiv / on (web) Publishing site
Abstract
1 Introduction
2 Proposed Process
3 Related Work and Discussion
4 Conclusion


Assessing AI Impact Assessments: A Classroom Study / 2311.11193 / ISBN:https://doi.org/10.48550/arXiv.2311.11193 / Published by ArXiv / on (web) Publishing site
References


GPT in Data Science: A Practical Exploration of Model Selection / 2311.11516 / ISBN:https://doi.org/10.48550/arXiv.2311.11516 / Published by ArXiv / on (web) Publishing site
Abstract
I. Introduction
II. Background
IV. Comparative results
VI. Future work
References


Large Language Models in Education: Vision and Opportunities / 2311.13160 / ISBN:https://doi.org/10.48550/arXiv.2311.13160 / Published by ArXiv / on (web) Publishing site
Abstract
I. Introduction
II. Education and LLMS
III. Key technologies for EDULLMS
IV. LLM-empowered education
V. Key points in LLMSEDU
VI. Challenges and future directions
VII. Conclusion
References


Towards Auditing Large Language Models: Improving Text-based Stereotype Detection / 2311.14126 / ISBN:https://doi.org/10.48550/arXiv.2311.14126 / Published by ArXiv / on (web) Publishing site
Abstract
1 Introduction
3 Methodology
4 Results and Discussion
5 Conclusion and Future Work


Potential Societal Biases of ChatGPT in Higher Education: A Scoping Review / 2311.14381 / ISBN:https://doi.org/10.48550/arXiv.2311.14381 / Published by ArXiv / on (web) Publishing site
Abstract
FINDINGS
DISCUSSION


Generative AI and US Intellectual Property Law / 2311.16023 / ISBN:https://doi.org/10.48550/arXiv.2311.16023 / Published by ArXiv / on (web) Publishing site
V. Potential harms and mitigation


Survey on AI Ethics: A Socio-technical Perspective / 2311.17228 / ISBN:https://doi.org/10.48550/arXiv.2311.17228 / Published by ArXiv / on (web) Publishing site
3 Transparency and explainability


Deepfakes, Misinformation, and Disinformation in the Era of Frontier AI, Generative AI, and Large AI Models / 2311.17394 / ISBN:https://doi.org/10.48550/arXiv.2311.17394 / Published by ArXiv / on (web) Publishing site
III. The rise of large AI models


Privacy and Copyright Protection in Generative AI: A Lifecycle Perspective / 2311.18252 / ISBN:https://doi.org/10.48550/arXiv.2311.18252 / Published by ArXiv / on (web) Publishing site
2 Legal Basis of Privacy and Copyright Concerns over Generative AI


Contra generative AI detection in higher education assessments / 2312.05241 / ISBN:https://doi.org/10.48550/arXiv.2312.05241 / Published by ArXiv / on (web) Publishing site
2. The pitfalls in detecting generative AI output


Control Risk for Potential Misuse of Artificial Intelligence in Science / 2312.06632 / ISBN:https://doi.org/10.48550/arXiv.2312.06632 / Published by ArXiv / on (web) Publishing site
2 Risks of Misuse for Artificial Intelligence in Science
3 Control the Risks of AI Models in Science
6 Related Works
Appendix C Detailed Implementation of SciGuard


Investigating Responsible AI for Scientific Research: An Empirical Study / 2312.09561 / ISBN:https://doi.org/10.48550/arXiv.2312.09561 / Published by ArXiv / on (web) Publishing site
V. Discussion
References


Designing Guiding Principles for NLP for Healthcare: A Case Study of Maternal Health / 2312.11803 / ISBN:https://doi.org/10.48550/arXiv.2312.11803 / Published by ArXiv / on (web) Publishing site
Abstract
1 Objective
3 Materials and methods
4 Results
5 Discussion
6 Conclusion
References


Learning Human-like Representations to Enable Learning Human Values / 2312.14106 / ISBN:https://doi.org/10.48550/arXiv.2312.14106 / Published by ArXiv / on (web) Publishing site
1 Introduction
2 Related Work
6 Discussion
References


Improving Task Instructions for Data Annotators: How Clear Rules and Higher Pay Increase Performance in Data Annotation in the AI Economy / 2312.14565 / ISBN:https://doi.org/10.48550/arXiv.2312.14565 / Published by ArXiv / on (web) Publishing site
References


Autonomous Threat Hunting: A Future Paradigm for AI-Driven Threat Intelligence / 2401.00286 / ISBN:https://doi.org/10.48550/arXiv.2401.00286 / Published by ArXiv / on (web) Publishing site
References


Exploring the Frontiers of LLMs in Psychological Applications: A Comprehensive Review / 2401.01519 / ISBN:https://doi.org/10.48550/arXiv.2401.01519 / Published by ArXiv / on (web) Publishing site
Abstract
1. Introduction
2. LLMs in cognitive and behavioral psychology
3. LLMs in clinical and counseling psychology
4. LLMs in educational and developmental psychology
5. LLMs in social and cultural psychology
6. LLMs as research tools in psychology
7. Challenges and future directions
8. Conclusion


Synthetic Data in AI: Challenges, Applications, and Ethical Implications / 2401.01629 / ISBN:https://doi.org/10.48550/arXiv.2401.01629 / Published by ArXiv / on (web) Publishing site
2. The generation of synthetic data


MULTI-CASE: A Transformer-based Ethics-aware Multimodal Investigative Intelligence Framework / 2401.01955 / ISBN:https://doi.org/10.48550/arXiv.2401.01955 / Published by ArXiv / on (web) Publishing site
III. Methodology: model development
VI. Discussion and future work
VII. Conclusion


Business and ethical concerns in domestic Conversational Generative AI-empowered multi-robot systems / 2401.09473 / ISBN:https://doi.org/10.48550/arXiv.2401.09473 / Published by ArXiv / on (web) Publishing site
2 Background
5 Discussion


FAIR Enough: How Can We Develop and Assess a FAIR-Compliant Dataset for Large Language Models' Training? / 2401.11033 / ISBN:https://doi.org/10.48550/arXiv.2401.11033 / Published by ArXiv / on (web) Publishing site
Abstract
1 Introduction
3 Data Management Challenges in Large Language Models
4 Framework for FAIR Data Principles Integration in LLM Development
5 Discussion
6 Conclusion
References
Appendices


Beyond principlism: Practical strategies for ethical AI use in research practices / 2401.15284 / ISBN:https://doi.org/10.48550/arXiv.2401.15284 / Published by ArXiv / on (web) Publishing site
Abstract
1 The “Triple-Too” problem of AI ethics
2 A shift to user-centered realism in scientific contexts
3 Five specific goals and action-guiding strategies for ethical AI use in research practices
References


Detecting Multimedia Generated by Large AI Models: A Survey / 2402.00045 / ISBN:https://doi.org/10.48550/arXiv.2402.00045 / Published by ArXiv / on (web) Publishing site
1 Introduction
2 Generation
3 Detection
References


Generative Artificial Intelligence in Higher Education: Evidence from an Analysis of Institutional Policies and Guidelines / 2402.01659 / ISBN:https://doi.org/10.48550/arXiv.2402.01659 / Published by ArXiv / on (web) Publishing site
1. Introduction


Commercial AI, Conflict, and Moral Responsibility: A theoretical analysis and practical approach to the moral responsibilities associated with dual-use AI technology / 2402.01762 / ISBN:https://doi.org/10.48550/arXiv.2402.01762 / Published by ArXiv / on (web) Publishing site
2 Establishing the novel aspect of AI as a crossover technology
4 Recommendations to address threats posed by crossover AI technology


(A)I Am Not a Lawyer, But...: Engaging Legal Experts towards Responsible LLM Policies for Legal Advice / 2402.01864 / ISBN:https://doi.org/10.48550/arXiv.2402.01864 / Published by ArXiv / on (web) Publishing site
Abstract
1 Introduction
2 Related work and our approach
4 Results


POLARIS: A framework to guide the development of Trustworthy AI systems / 2402.05340 / ISBN:https://doi.org/10.48550/arXiv.2402.05340 / Published by ArXiv / on (web) Publishing site
5 POLARIS framework application


Generative Artificial Intelligence in Healthcare: Ethical Considerations and Assessment Checklist / 2311.02107 / ISBN:https://doi.org/10.48550/arXiv.2311.02107 / Published by ArXiv / on (web) Publishing site
Results
Discussion


How do machines learn? Evaluating the AIcon2abs method / 2401.07386 / ISBN:https://doi.org/10.48550/arXiv.2401.07386 / Published by ArXiv / on (web) Publishing site
1. Introduction


I Think, Therefore I am: Benchmarking Awareness of Large Language Models Using AwareBench / 2401.17882 / ISBN:https://doi.org/10.48550/arXiv.2401.17882 / Published by ArXiv / on (web) Publishing site
Abstract
1 Introduction
2 Related Work
3 Awareness in LLMs
4 Awareness Dataset: AWAREEVAL
5 Experiments
6 Conclusion
Limitation
Ethical Statement
References
A AWAREEVAL Dataset Details
B Experimental Settings & Results


Mapping the Ethics of Generative AI: A Comprehensive Scoping Review / 2402.08323 / ISBN:https://doi.org/10.48550/arXiv.2402.08323 / Published by ArXiv / on (web) Publishing site
1 Introduction
3 Results
4 Discussion
References


User Modeling and User Profiling: A Comprehensive Survey / 2402.09660 / ISBN:https://doi.org/10.48550/arXiv.2402.09660 / Published by ArXiv / on (web) Publishing site
1 Introduction


Inadequacies of Large Language Model Benchmarks in the Era of Generative Artificial Intelligence / 2402.09880 / ISBN:https://doi.org/10.48550/arXiv.2402.09880 / Published by ArXiv / on (web) Publishing site
Abstract
I. Introduction
II. Background and Related Work
III. Unified Evaluation Framework For LLM Benchmarks
IV. Technological Aspects
V. Processual Elements
VI. Human Dynamics
VII. Discussions
VIII. Conclusion
References


Evolving AI Collectives to Enhance Human Diversity and Enable Self-Regulation / 2402.12590 / ISBN:https://doi.org/10.48550/arXiv.2402.12590 / Published by ArXiv / on (web) Publishing site
1. Introduction
2. Emergence of Free-Formed AI Collectives
5. Open Challenges for Free-Formed AI Collectives
References
B. Sentence Making Simulation


What if LLMs Have Different World Views: Simulating Alien Civilizations with LLM-based Agents / 2402.13184 / ISBN:https://doi.org/10.48550/arXiv.2402.13184 / Published by ArXiv / on (web) Publishing site
Abstract
1 Introduction
2 Related Work
4 CosmoAgent Architecture
5 Experiment Design
7 Conclusion


Multi-stakeholder Perspective on Responsible Artificial Intelligence and Acceptability in Education / 2402.15027 / ISBN:https://doi.org/10.48550/arXiv.2402.15027 / Published by ArXiv / on (web) Publishing site
1 Introduction


Autonomous Vehicles: Evolution of Artificial Intelligence and Learning Algorithms / 2402.17690 / ISBN:https://doi.org/10.48550/arXiv.2402.17690 / Published by ArXiv / on (web) Publishing site
IV. AI’s Role in the Emerging Trend of Internet of Things (IoT) Ecosystem for Autonomous Vehicles
VI. AI and Learning Algorithms Statistics for Autonomous Vehicles


Envisioning the Applications and Implications of Generative AI for News Media / 2402.18835 / ISBN:https://doi.org/10.48550/arXiv.2402.18835 / Published by ArXiv / on (web) Publishing site
2 The Suitability of Generative AI for Newsroom Tasks


The Minimum Information about CLinical Artificial Intelligence Checklist for Generative Modeling Research (MI-CLAIM-GEN) / 2403.02558 / ISBN:https://doi.org/10.48550/arXiv.2403.02558 / Published by ArXiv / on (web) Publishing site
Abstract
References


A Survey on Human-AI Teaming with Large Pre-Trained Models / 2403.04931 / ISBN:https://doi.org/10.48550/arXiv.2403.04931 / Published by ArXiv / on (web) Publishing site
2 AI Model Improvements with Human-AI Teaming
3 Effective Human-AI Joint Systems
4 Safe, Secure and Trustworthy AI
5 Applications
References


Exploring the Frontiers of LLMs in Psychological Applications: A Comprehensive Review / 2401.01519 / ISBN:https://doi.org/10.48550/arXiv.2401.01519 / Published by ArXiv / on (web) Publishing site
References


AGI Artificial General Intelligence for Education / 2304.12479 / ISBN:https://doi.org/10.48550/arXiv.2304.12479 / Published by ArXiv / on (web) Publishing site
2. What is AGI
5. Discussion


Legally Binding but Unfair? Towards Assessing Fairness of Privacy Policies / 2403.08115 / ISBN:https://doi.org/10.48550/arXiv.2403.08115 / Published by ArXiv / on (web) Publishing site
References


Review of Generative AI Methods in Cybersecurity / 2403.08701 / ISBN:https://doi.org/10.48550/arXiv.2403.08701 / Published by ArXiv / on (web) Publishing site
Abstract
1 Introduction
3 Cyber Offense
4 Cyber Defence
References
Appendix A GPT3.5 and GPT4 OCO-scripting


Evaluation Ethics of LLMs in Legal Domain / 2403.11152 / ISBN:https://doi.org/10.48550/arXiv.2403.11152 / Published by ArXiv / on (web) Publishing site
Abstract
1 Introduction
2 Related Work
3 Method
4 Experiment
5 Conclusion & Future Work


AI Ethics: A Bibliometric Analysis, Critical Issues, and Key Gaps / 2403.14681 / ISBN:https://doi.org/10.48550/arXiv.2403.14681 / Published by ArXiv / on (web) Publishing site
Key Gaps


The Journey to Trustworthy AI- Part 1 Pursuit of Pragmatic Frameworks / 2403.15457 / ISBN:https://doi.org/10.48550/arXiv.2403.15457 / Published by ArXiv / on (web) Publishing site
8 Implementation Framework
9 A Few Suggestions for a Viable Path Forward
10 Summary and Next Steps
References


Domain-Specific Evaluation Strategies for AI in Journalism / 2403.17911 / ISBN:https://doi.org/10.48550/arXiv.2403.17911 / Published by ArXiv / on (web) Publishing site
References


Power and Play Investigating License to Critique in Teams AI Ethics Discussions / 2403.19049 / ISBN:https://doi.org/10.48550/arXiv.2403.19049 / Published by ArXiv / on (web) Publishing site
3 RQ1: What Factors Influence Members’ “License to Critique” when Discussing AI Ethics with their Team?


AI Act and Large Language Models (LLMs): When critical issues and privacy impact require human and ethical oversight / 2404.00600 / ISBN:https://doi.org/10.48550/arXiv.2404.00600 / Published by ArXiv / on (web) Publishing site
6. Large Language Models (LLMs) - Introduction
8. Conclusions
9. References


Exploring the Nexus of Large Language Models and Legal Systems: A Short Survey / 2404.00990 / ISBN:https://doi.org/10.48550/arXiv.2404.00990 / Published by ArXiv / on (web) Publishing site
Abstract
1 Introduction
2 Applications of Large Language Models in Legal Tasks
3 Fine-Tuned Large Language Models in Various Countries and Regions
4 Legal Problems of Large Language Models
5 Data Resources for Large Language Models in Law
6 Conclusion and Future Directions
References


A Review of Multi-Modal Large Language and Vision Models / 2404.01322 / ISBN:https://doi.org/10.48550/arXiv.2404.01322 / Published by ArXiv / on (web) Publishing site
Abstract
1 Introduction
2 What is a Language Model?
3 Proprietary vs. Open Source LLMs
4 Specific Large Language Models
5 Vision Models and Multi-Modal Large Language Models
6 Model Tuning
7 Model Evaluation and Benchmarking
8 Conclusions
References


Designing for Human-Agent Alignment: Understanding what humans want from their agents / 2404.04289 / ISBN:https://doi.org/10.1145/3613905.3650948 / Published by ArXiv / on (web) Publishing site
1 Introduction
2 Background
4 Findings


Is Your AI Truly Yours? Leveraging Blockchain for Copyrights, Provenance, and Lineage / 2404.06077 / ISBN:https://doi.org/10.48550/arXiv.2404.06077 / Published by ArXiv / on (web) Publishing site
I. Introduction


Frontier AI Ethics: Anticipating and Evaluating the Societal Impacts of Language Model Agents / 2404.06750 / ISBN:https://arxiv.org/abs/2404.06750 / Published by ArXiv / on (web) Publishing site
Abstract
Introduction
A Primer
Polarised Responses
Rebooting Machine Ethics
Language Model Agents in Society


AI Alignment: A Comprehensive Survey / 2310.19852 / ISBN:https://doi.org/10.48550/arXiv.2310.19852 / Published by ArXiv / on (web) Publishing site
1 Introduction
2 Learning from Feedback
3 Learning under Distribution Shift
4 Assurance
5 Governance
6 Conclusion
References


Generative Ghosts: Anticipating Benefits and Risks of AI Afterlives / 2402.01662 / ISBN:https://doi.org/10.48550/arXiv.2402.01662 / Published by ArXiv / on (web) Publishing site
1 Introduction
2 Related Work


PoliTune: Analyzing the Impact of Data Selection and Fine-Tuning on Economic and Political Biases in Large Language Models / 2404.08699 / ISBN:https://doi.org/10.48550/arXiv.2404.08699 / Published by ArXiv / on (web) Publishing site
Abstract
1 Introduction
2 Background and Related Work
3 Methodology
4 Evaluation
5 Conclusion
References


Debunking Robot Rights Metaphysically, Ethically, and Legally / 2404.10072 / ISBN:https://doi.org/10.48550/arXiv.2404.10072 / Published by ArXiv / on (web) Publishing site
1 Introduction
3 The Robots at Issue


Characterizing and modeling harms from interactions with design patterns in AI interfaces / 2404.11370 / ISBN:https://doi.org/10.48550/arXiv.2404.11370 / Published by ArXiv / on (web) Publishing site
5 Case Studies
6 Discussion


Taxonomy to Regulation: A (Geo)Political Taxonomy for AI Risks and Regulatory Measures in the EU AI Act / 2404.11476 / ISBN:https://doi.org/10.48550/arXiv.2404.11476 / Published by ArXiv / on (web) Publishing site
3 A Geo-Political AI Risk Taxonomy
4 European Union Artificial Intelligence Act


Just Like Me: The Role of Opinions and Personal Experiences in The Perception of Explanations in Subjective Decision-Making / 2404.12558 / ISBN:https://doi.org/10.48550/arXiv.2404.12558 / Published by ArXiv / on (web) Publishing site
Abstract
1 Introduction
2 Method
4 Discussion and Implications
5 Limitations
6 Conclusion & Future Work


Large Language Model Supply Chain: A Research Agenda / 2404.12736 / ISBN:https://doi.org/10.48550/arXiv.2404.12736 / Published by ArXiv / on (web) Publishing site
Abstract
1 Introduction
2 Definition of LLM Supply Chain
3 LLM Infrastructure
4 LLM Lifecycle
5 Downstream Ecosystem
6 Conclusion
References


The Necessity of AI Audit Standards Boards / 2404.13060 / ISBN:https://doi.org/10.48550/arXiv.2404.13060 / Published by ArXiv / on (web) Publishing site
1 Introduction
2 Audit the process, not just the product
References


Modeling Emotions and Ethics with Large Language Models / 2404.13071 / ISBN:https://doi.org/10.48550/arXiv.2404.13071 / Published by ArXiv / on (web) Publishing site
Abstract
1 Introduction
2 Qualifying and Quantifying Emotions
3 Case Study #1: Linguistic Features of Emotion
4 Qualifying and Quantifying Ethics
5 Concluding Remarks
References


A Practical Multilevel Governance Framework for Autonomous and Intelligent Systems / 2404.13719 / ISBN:https://doi.org/10.48550/arXiv.2404.13719 / Published by ArXiv / on (web) Publishing site
I. Introduction


Beyond Personhood: Agency, Accountability, and the Limits of Anthropomorphic Ethical Analysis / 2404.13861 / ISBN:https://doi.org/10.48550/arXiv.2404.13861 / Published by ArXiv / on (web) Publishing site
2 Mechanistic Agency: A Common View in AI Practice
3 Volitional Agency: an Alternative Approach
4 Alternatives to AI as Agent
References


Not a Swiss Army Knife: Academics' Perceptions of Trade-Offs Around Generative Artificial Intelligence Use / 2405.00995 / ISBN:https://doi.org/10.48550/arXiv.2405.00995 / Published by ArXiv / on (web) Publishing site
2 Related Work


A Survey on Large Language Models for Critical Societal Domains: Finance, Healthcare, and Law / 2405.01769 / ISBN:https://doi.org/10.48550/arXiv.2405.01769 / Published by ArXiv / on (web) Publishing site
Abstract
1 Introduction
2 Related Surveys
3 Finance
4 Medicine and Healthcare
5 Law
6 Ethics
7 Conclusion


AI-Powered Autonomous Weapons Risk Geopolitical Instability and Threaten AI Research / 2405.01859 / ISBN:https://doi.org/10.48550/arXiv.2405.01859 / Published by ArXiv / on (web) Publishing site
2. Current State of AWS
3. AWS Proliferation and Threats to Academic Research


Exploring the Potential of the Large Language Models (LLMs) in Identifying Misleading News Headlines / 2405.03153 / ISBN:https://doi.org/10.48550/arXiv.2405.03153 / Published by ArXiv / on (web) Publishing site
Abstract
1 Introduction
2 Related Work
3 Method
4 Results
5 Discussion
6 Conclusion


Organizing a Society of Language Models: Structures and Mechanisms for Enhanced Collective Intelligence / 2405.03825 / ISBN:https://doi.org/10.48550/arXiv.2405.03825 / Published by ArXiv / on (web) Publishing site
Abstract
1 Introduction
2 Motivation
3 Proposed Organizational Forms
4 Interaction Mechanisms
5 Governance and Organization
6 Unified Legal Framework
7 Conclusion
References


A Fourth Wave of Open Data? Exploring the Spectrum of Scenarios for Open Data and Generative AI / 2405.04333 / ISBN:https://doi.org/10.48550/arXiv.2405.04333 / Published by ArXiv / on (web) Publishing site
Glossary of Terms
Executive Summary
1. Introduction
2. Methodology
3. A Spectrum of Scenarios of Open Data for Generative AI
5. Recommendations for Advancing Open Data in Generative AI


Trustworthy AI-Generative Content in Intelligent 6G Network: Adversarial, Privacy, and Fairness / 2405.05930 / ISBN:https://doi.org/10.48550/arXiv.2405.05930 / Published by ArXiv / on (web) Publishing site
II. Trustworthy AIGC in 6G Network
III. Adversarial of AIGC Models in 6G Network
V. Fairness of AIGC in 6G Network
VI. Case Study
VIII. Conclusion
References


RAI Guidelines: Method for Generating Responsible AI Guidelines Grounded in Regulations and Usable by (Non-)Technical Roles / 2307.15158 / ISBN:https://doi.org/10.48550/arXiv.2307.15158 / Published by ArXiv / on (web) Publishing site
6 Discussion


Redefining Qualitative Analysis in the AI Era: Utilizing ChatGPT for Efficient Thematic Analysis / 2309.10771 / ISBN:https://doi.org/10.48550/arXiv.2309.10771 / on (web) Publishing site
Abstract
1 Introduction
2 Related Work
5 Analyses of the Design Process
6 User’s Attitude on ChatGPT’s Qualitative Analysis Assistance: from no to yes
7 Discussion
8 Limitations and Future Work
9 Conclusion


Should agentic conversational AI change how we think about ethics? Characterising an interactional ethics centred on respect / 2401.09082 / ISBN:https://doi.org/10.48550/arXiv.2401.09082 / Published by ArXiv / on (web) Publishing site
Abstract
Introduction
Social-interactional harms
Design implications for LLM agents


Unsocial Intelligence: an Investigation of the Assumptions of AGI Discourse / 2401.13142 / ISBN:https://doi.org/10.48550/arXiv.2401.13142 / Published by ArXiv / on (web) Publishing site
3 The Motley Choices of AGI Discourse


Not My Voice! A Taxonomy of Ethical and Safety Harms of Speech Generators / 2402.01708 / ISBN:https://doi.org/10.48550/arXiv.2402.01708 / Published by ArXiv / on (web) Publishing site
7 Discussion


The Wolf Within: Covert Injection of Malice into MLLM Societies via an MLLM Operative / 2402.14859 / ISBN:https://doi.org/10.48550/arXiv.2402.14859 / Published by ArXiv / on (web) Publishing site
Abstract
1. Introduction
2. Related Work
3. Methodology
4. Experiments
References


Social Choice Should Guide AI Alignment in Dealing with Diverse Human Feedback / 2404.10271 / ISBN:https://doi.org/10.48550/arXiv.2404.10271 / Published by ArXiv / on (web) Publishing site
1. Introduction
2. Background
3. What Are the Collective Decision Problems and their Alternatives in this Context?
10. Conclusion
References


A scoping review of using Large Language Models (LLMs) to investigate Electronic Health Records (EHRs) / 2405.03066 / ISBN:https://doi.org/10.48550/arXiv.2405.03066 / Published by ArXiv / on (web) Publishing site
Abstract
1 Introduction
2 Materials
3 Results
4 Discussion
5 Conclusions
Appendix
References


Integrating Emotional and Linguistic Models for Ethical Compliance in Large Language Models / 2405.07076 / ISBN:https://doi.org/10.48550/arXiv.2405.07076 / Published by ArXiv / on (web) Publishing site
Abstract
1 Introduction
2 Related Work
3 Quantitative Models of Emotions, Behaviors, and Ethics
4 Pilot Studies
5 Conclusion
Limitations
Appendix S: Multiple Adversarial LLMs


Using ChatGPT for Thematic Analysis / 2405.08828 / ISBN:https://doi.org/10.48550/arXiv.2405.08828 / Published by ArXiv / on (web) Publishing site
1 Introduction
2 Coding in Thematic Analysis: Manual vs GPT-driven Approaches


When AI Eats Itself: On the Caveats of Data Pollution in the Era of Generative AI / 2405.09597 / ISBN:https://doi.org/10.48550/arXiv.2405.09597 / Published by ArXiv / on (web) Publishing site
1 Introduction
3 RQ2: What Technical Strategies Can Be Employed to Mitigate the Negative Consequences of AI Autophagy?


A Comprehensive Overview of Large Language Models (LLMs) for Cyber Defences: Opportunities and Directions / 2405.14487 / ISBN:https://doi.org/10.48550/arXiv.2405.14487 / Published by ArXiv / on (web) Publishing site
Abstract
I. Introduction
II. Threat Intelligence
III. Vulnerability Assessment
IV. Network Security
V. Privacy Preservation
VI. Awareness
VII. Cyber Security Operations Automation
VIII. Ethical LLMs
IX. Challenges and Open Problems
X. Conclusions
References


The Future of Child Development in the AI Era. Cross-Disciplinary Perspectives Between AI and Child Development Experts / 2405.19275 / ISBN:https://doi.org/10.48550/arXiv.2405.19275 / Published by ArXiv / on (web) Publishing site
1. Introduction


Using Large Language Models for Humanitarian Frontline Negotiation: Opportunities and Considerations / 2405.20195 / ISBN:https://doi.org/10.48550/arXiv.2405.20195 / Published by ArXiv / on (web) Publishing site
Abstract
1. Introduction
2. Related Work
3. Method
5. Interview Results: Opportunities and Concerns of Using LLMs in the Frontline
6. Discussion
A. Appendix


The AI Alignment Paradox / 2405.20806 / ISBN:https://doi.org/10.48550/arXiv.2405.20806 / Published by ArXiv / on (web) Publishing site
References


Responsible AI for Earth Observation / 2405.20868 / ISBN:https://doi.org/10.48550/arXiv.2405.20868 / Published by ArXiv / on (web) Publishing site
5 Maintaining Scientific Excellence, Open Data, and Guiding AI Usage Based on Ethical Principles in EO


Transforming Computer Security and Public Trust Through the Exploration of Fine-Tuning Large Language Models / 2406.00628 / ISBN:https://doi.org/10.48550/arXiv.2406.00628 / Published by ArXiv / on (web) Publishing site
Abstract
1 Introduction
2 Background, Foundational Studies, and Discussion:
3 Experimental Design, Overview, and Discussion
4 Comparative Analysis of Pre-Trained Models.
5 Discussion and further research
References


How Ethical Should AI Be? How AI Alignment Shapes the Risk Preferences of LLMs / 2406.01168 / ISBN:https://doi.org/10.48550/arXiv.2406.01168 / Published by ArXiv / on (web) Publishing site
Abstract
Introduction
I. Description of Method/Empirical Design
II. Risk Characteristics of LLMs
III. Impact of Alignment on LLMs’ Risk Preferences
IV. Impact of Alignments on Corporate Investment Forecasts
V. Robustness: Transcript Readability and Investment Score Predictability
VI. Conclusions
References
Figures and tables


MoralBench: Moral Evaluation of LLMs / 2406.04428 / Published by ArXiv / on (web) Publishing site
Abstract
1 Introduction
2 Related Work
3 Benchmark and Method
4 Experiments
5 Conclusion
References
Appendix


Can Prompt Modifiers Control Bias? A Comparative Analysis of Text-to-Image Generative Models / 2406.05602 / Published by ArXiv / on (web) Publishing site
1. Introduction
2. Related Work
References


The Impact of AI on Academic Research and Publishing / 2406.06009 / Published by ArXiv / on (web) Publishing site
Introduction
Ethics of AI for Writing Papers
AI in Editorial Processes
References


An Empirical Design Justice Approach to Identifying Ethical Considerations in the Intersection of Large Language Models and Social Robotics / 2406.06400 / ISBN:https://doi.org/10.48550/arXiv.2406.06400 / Published by ArXiv / on (web) Publishing site
Abstract
1 Introduction
2 Theoretical Background
3 Methodology
4 Findings
5 Discussion
6 Conclusions and Recommendations
References


The Ethics of Interaction: Mitigating Security Threats in LLMs / 2401.12273 / ISBN:https://doi.org/10.48550/arXiv.2401.12273 / Published by ArXiv / on (web) Publishing site
Abstract
1 Introduction
2 Why Ethics Matter in LLM Attacks?
3 Potential Misuse and Security Concerns
5 Preemptive Ethical Measures


Global AI Governance in Healthcare: A Cross-Jurisdictional Regulatory Analysis / 2406.08695 / ISBN:https://doi.org/10.48550/arXiv.2406.08695 / Published by ArXiv / on (web) Publishing site
2 Related Work
4 Global Regulatory Landscape of AI
5 Generative AI: The New Frontier
7 Future Directions


Some things never change: how far generative AI can really change software engineering practice / 2406.09725 / ISBN:https://doi.org/10.48550/arXiv.2406.09725 / Published by ArXiv / on (web) Publishing site
4 Results


Federated Learning driven Large Language Models for Swarm Intelligence: A Survey / 2406.09831 / ISBN:https://doi.org/10.48550/arXiv.2406.09831 / Published by ArXiv / on (web) Publishing site
Abstract
I. Introduction
II. Foundations and Integration of SI and LLM
III. Federated LLMs for Swarm Intelligence
IV. Learned Lessons and Open Challenges
V. Conclusion


Applications of Generative AI in Healthcare: algorithmic, ethical, legal and societal considerations / 2406.10632 / ISBN:https://doi.org/10.48550/arXiv.2406.10632 / Published by ArXiv / on (web) Publishing site
III. Analysis
References
Appendix C Algorithmic / technical aspects


Conversational Agents as Catalysts for Critical Thinking: Challenging Design Fixation in Group Design / 2406.11125 / ISBN:https://doi.org/10.48550/arXiv.2406.11125 / Published by ArXiv / on (web) Publishing site
1 INTRODUCTION
2 BEYOND RECOMMENDATIONS: ENHANCING CRITICAL THINKING WITH GENERATIVE AI
REFERENCES


Current state of LLM Risks and AI Guardrails / 2406.12934 / ISBN:https://doi.org/10.48550/arXiv.2406.12934 / Published by ArXiv / on (web) Publishing site
Abstract
1 Introduction
2 Large Language Model Risks
3 Strategies in Securing Large Language models
4 Challenges in Implementing Guardrails
5 Open Source Tools
7 Conclusion
References


Leveraging Large Language Models for Patient Engagement: The Power of Conversational AI in Digital Health / 2406.13659 / ISBN:https://doi.org/10.48550/arXiv.2406.13659 / Published by ArXiv / on (web) Publishing site
Abstract
I. INTRODUCTION
II. RECENT ADVANCEMENTS IN LARGE LANGUAGE MODELS
III. CASE STUDIES: APPLICATIONS OF LLMS IN PATIENT ENGAGEMENT
IV. DISCUSSION AND FUTURE DIRECTIONS
V. CONCLUSION
REFERENCES


Documenting Ethical Considerations in Open Source AI Models / 2406.18071 / ISBN:https://doi.org/10.48550/arXiv.2406.18071 / Published by ArXiv / on (web) Publishing site
4 RESULTS


AI Alignment through Reinforcement Learning from Human Feedback? Contradictions and Limitations / 2406.18346 / ISBN:https://doi.org/10.48550/arXiv.2406.18346 / Published by ArXiv / on (web) Publishing site
Abstract
1 Introduction
2 Background
3 Limitations of RLxF
4 The Internal Tensions and Ethical Issues in RLxF
5 Rebooting Safety and Alignment: Integrating AI Ethics and System Safety
6 Conclusion
References


A Survey on Privacy Attacks Against Digital Twin Systems in AI-Robotics / 2406.18812 / ISBN:https://doi.org/10.48550/arXiv.2406.18812 / Published by ArXiv / on (web) Publishing site
III. ATTACKS ON DT-INTEGRATED AI ROBOTS
IV. DT-INTEGRATED ROBOTICS DESIGN CONSIDERATIONS AND DISCUSSION
REFERENCES


Staying vigilant in the Age of AI: From content generation to content authentication / 2407.00922 / ISBN:https://doi.org/10.48550/arXiv.2407.00922 / Published by ArXiv / on (web) Publishing site
Art Practice: Human Reactions to Synthetic Fake Content
Emphasizing Reasoning Over Detection


SecGenAI: Enhancing Security of Cloud-based Generative AI Applications within Australian Critical Technologies of National Interest / 2407.01110 / ISBN:https://doi.org/10.48550/arXiv.2407.01110 / Published by ArXiv / on (web) Publishing site
II. UNDERSTANDING GENAI SECURITY


A Blueprint for Auditing Generative AI / 2407.05338 / ISBN:https://doi.org/10.48550/arXiv.2407.05338 / Published by ArXiv / on (web) Publishing site
2 Why audit generative AI systems?
5 Model audits


Auditing of AI: Legal, Ethical and Technical Approaches / 2407.06235 / Published by ArXiv / on (web) Publishing site
3 The need to audit AI systems – a confluence of top-down and bottom-up pressures


Evolving AI Collectives to Enhance Human Diversity and Enable Self-Regulation / 2402.12590 / ISBN:https://doi.org/10.48550/arXiv.2402.12590 / Published by ArXiv / on (web) Publishing site
D. Results for Claude 3


Bridging the Global Divide in AI Regulation: A Proposal for a Contextual, Coherent, and Commensurable Framework / 2303.11196 / ISBN:https://doi.org/10.48550/arXiv.2303.11196 / Published by ArXiv / on (web) Publishing site
IV. Proposing an Alternative 3C Framework


CogErgLLM: Exploring Large Language Model Systems Design Perspective Using Cognitive Ergonomics / 2407.02885 / ISBN:https://doi.org/10.48550/arXiv.2407.02885 / Published by ArXiv / on (web) Publishing site
Abstract
1 Introduction
2 Background
3 Conceptual Foundations
4 Design Framework
6 Discussion
7 Conclusion
Limitations
Ethical Considerations


Past, Present, and Future: A Survey of The Evolution of Affective Robotics For Well-being / 2407.02957 / ISBN:https://doi.org/10.48550/arXiv.2407.02957 / Published by ArXiv / on (web) Publishing site
VI. Future Opportunities in Affective Robotics for Well-Being


With Great Power Comes Great Responsibility: The Role of Software Engineers / 2407.08823 / ISBN:https://doi.org/10.48550/arXiv.2407.08823 / Published by ArXiv / on (web) Publishing site
1 Introduction


Have We Reached AGI? Comparing ChatGPT, Claude, and Gemini to Human Literacy and Education Benchmarks / 2407.09573 / ISBN:https://doi.org/10.48550/arXiv.2407.09573 / Published by ArXiv / on (web) Publishing site
Abstract
1 Introduction
2 Literature Review
3 Methodology
4 Data Analysis and Results
5 Discussion
6 Conclusion


Generative AI for Health Technology Assessment: Opportunities, Challenges, and Policy Considerations / 2407.11054 / ISBN:https://doi.org/10.48550/arXiv.2407.11054 / Published by ArXiv / on (web) Publishing site
Abstract
Introduction
A brief history of AI and generative AI
Applications of generative AI in literature reviews and evidence synthesis
Applications of generative AI to real-world evidence (RWE):
Applications of generative AI to health economic modeling
Limitations of generative AI in HTA applications
Policy landscape
Glossary
Appendices


Thorns and Algorithms: Navigating Generative AI Challenges Inspired by Giraffes and Acacias / 2407.11360 / ISBN:https://doi.org/10.48550/arXiv.2407.11360 / Published by ArXiv / on (web) Publishing site
4 Generative AI and Humans: Risks and Mitigation


Prioritizing High-Consequence Biological Capabilities in Evaluations of Artificial Intelligence Models / 2407.13059 / ISBN:https://doi.org/10.48550/arXiv.2407.13059 / Published by ArXiv / on (web) Publishing site
Introduction
Next Steps for AI Biosecurity Evaluations


Assurance of AI Systems From a Dependability Perspective / 2407.13948 / ISBN:https://doi.org/10.48550/arXiv.2407.13948 / Published by ArXiv / on (web) Publishing site
3 Assurance of AI Systems for Specific Functions
4 Assurance for General-Purpose AI
5 Assurance and Alignment for AGI
6 Summary and Conclusion
References


Open Artificial Knowledge / 2407.14371 / ISBN:https://doi.org/10.48550/arXiv.2407.14371 / Published by ArXiv / on (web) Publishing site
Abstract
1. Introduction
2. Key Challenges of Artificial Data
3. OAK Dataset
4. Automatic Prompt Generation
References
Appendices


RogueGPT: dis-ethical tuning transforms ChatGPT4 into a Rogue AI in 158 Words / 2407.15009 / ISBN:https://doi.org/10.48550/arXiv.2407.15009 / Published by ArXiv / on (web) Publishing site
I. Introduction
II. Background
III. Methodology
VI. Discussion
VII. Conclusion
References


Mapping the individual, social, and biospheric impacts of Foundation Models / 2407.17129 / ISBN:https://doi.org/10.48550/arXiv.2407.17129 / Published by ArXiv / on (web) Publishing site
1 Introduction
3 Methods: Snowball and Structured Search
4 Mapping Individual, Social, and Biospheric Impacts of Foundation Models
References
A Appendix


Exploring the Role of Social Support when Integrating Generative AI into Small Business Workflows / 2407.21404 / ISBN:https://doi.org/10.48550/arXiv.2407.21404 / Published by ArXiv / on (web) Publishing site
2 Related Work
References


Surveys Considered Harmful? Reflecting on the Use of Surveys in AI Research, Development, and Governance / 2408.01458 / ISBN:https://doi.org/10.48550/arXiv.2408.01458 / Published by ArXiv / on (web) Publishing site
References
B Additional Materials for Pilot Survey


Improving Large Language Model (LLM) fidelity through context-aware grounding: A systematic approach to reliability and veracity / 2408.04023 / ISBN:https://doi.org/10.48550/arXiv.2408.04023 / Published by ArXiv / on (web) Publishing site
Abstract
3. Proposed framework
4. Model architecture and training parameters
5. Model Training


Between Copyright and Computer Science: The Law and Ethics of Generative AI / 2403.14653 / ISBN:https://doi.org/10.48550/arXiv.2403.14653 / Published by ArXiv / on (web) Publishing site
Introduction
I. The Why and How Behind LLMs
II. The Difference Between Academic and Commercial Research
III. A Guide for Data in LLM Research
IV. The Path Ahead


The Responsible Foundation Model Development Cheatsheet: A Review of Tools & Resources / 2406.16746 / ISBN:https://doi.org/10.48550/arXiv.2406.16746 / Published by ArXiv / on (web) Publishing site
8 Model Evaluation
References


Recent Advances in Generative AI and Large Language Models: Current Status, Challenges, and Perspectives / 2407.14962 / ISBN:https://doi.org/10.48550/arXiv.2407.14962 / Published by ArXiv / on (web) Publishing site
Abstract
I. Introduction
II. Generative AI
III. Language Modeling
IV. Challenges of Generative AI and LLMs
V. Bridging Research Gaps and Future Directions


VersusDebias: Universal Zero-Shot Debiasing for Text-to-Image Models via SLM-Based Prompt Engineering and Generative Adversary / 2407.19524 / ISBN:https://doi.org/10.48550/arXiv.2407.19524 / Published by ArXiv / on (web) Publishing site
I Introduction
2 Related Works
3 Method
4 Experiment


Neuro-Symbolic AI for Military Applications / 2408.09224 / ISBN:https://doi.org/10.48550/arXiv.2408.09224 / Published by ArXiv / on (web) Publishing site
I. Introduction
II. Neuro-Symbolic AI


Conference Submission and Review Policies to Foster Responsible Computing Research / 2408.09678 / ISBN:https://doi.org/10.48550/arXiv.2408.09678 / Published by ArXiv / on (web) Publishing site
Use of Generative AI in CS Conference Publications


Don't Kill the Baby: The Case for AI in Arbitration / 2408.11608 / ISBN:https://doi.org/10.48550/arXiv.2408.11608 / Published by ArXiv / on (web) Publishing site
Introduction
I. AI and the Federal Arbitration Act


CIPHER: Cybersecurity Intelligent Penetration-testing Helper for Ethical Researcher / 2408.11650 / ISBN:https://doi.org/10.48550/arXiv.2408.11650 / Published by ArXiv / on (web) Publishing site
Abstract
1. Introduction
2. Background and Related Works
3. Methodology
4. Experiment Results
5. Discussion and Future Works


Promises and challenges of generative artificial intelligence for human learning / 2408.12143 / ISBN:https://doi.org/10.48550/arXiv.2408.12143 / Published by ArXiv / on (web) Publishing site
1 Main
2 Promises
3 Challenges
References
Tables


Catalog of General Ethical Requirements for AI Certification / 2408.12289 / ISBN:https://doi.org/10.48550/arXiv.2408.12289 / Published by ArXiv / on (web) Publishing site
9 Sustainability (SU)


Is Generative AI the Next Tactical Cyber Weapon For Threat Actors? Unforeseen Implications of AI Generated Cyber Attacks / 2408.12806 / ISBN:https://doi.org/10.48550/arXiv.2408.12806 / Published by ArXiv / on (web) Publishing site
Abstract
I. Introduction
II. Related Work
III. Generative AI
IV. Attack Methodology
V. Conclusion


Has Multimodal Learning Delivered Universal Intelligence in Healthcare? A Comprehensive Survey / 2408.12880 / ISBN:https://doi.org/10.48550/arXiv.2408.12880 / Published by ArXiv / on (web) Publishing site
1 Introduction
2 Preliminaries
4 Contrastive Foundation Models (CFMs)
5 Multimodal LLMs (MLLMs)
6 Discussions of Current Studies
7 Challenges and Future Directions


A Survey for Large Language Models in Biomedicine / 2409.00133 / ISBN:https://doi.org/10.48550/arXiv.2409.00133 / Published by ArXiv / on (web) Publishing site
Abstract
1 Introduction
2 Background
3 LLMs in Zero-Shot Biomedical Applications
4 Adapting General LLMs to the Biomedical Field
5 Discussion
6 Conclusion
References


Digital Homunculi: Reimagining Democracy Research with Generative Agents / 2409.00826 / ISBN:https://doi.org/10.48550/arXiv.2409.00826 / Published by ArXiv / on (web) Publishing site
5. Annoyances or Dealbreakers?
References


Preliminary Insights on Industry Practices for Addressing Fairness Debt / 2409.02432 / ISBN:https://doi.org/10.48550/arXiv.2409.02432 / Published by ArXiv / on (web) Publishing site
3 Method
4 Findings


DetoxBench: Benchmarking Large Language Models for Multitask Fraud & Abuse Detection / 2409.06072 / ISBN:https://doi.org/10.48550/arXiv.2409.06072 / Published by ArXiv / on (web) Publishing site
Abstract
1 Introduction
2 Prior Benchmarks
3 Data Details
4 LLM Services (Infrastructure)
5 Prompting
6 Results
7 Limitations
9 Conclusion & Future Work
References
10 Appendix


Don't Kill the Baby: The Case for AI in Arbitration / 2408.11608 / ISBN:https://doi.org/10.48550/arXiv.2408.11608 / Published by ArXiv / on (web) Publishing site
II. The Critics Are Killing the Baby


Catalog of General Ethical Requirements for AI Certification / 2408.12289 / ISBN:https://doi.org/10.48550/arXiv.2408.12289 / Published by ArXiv / on (web) Publishing site
References


On the Creativity of Large Language Models / 2304.00008 / ISBN:https://doi.org/10.48550/arXiv.2304.00008 / Published by ArXiv / on (web) Publishing site
Abstract
1 Introduction
2 A Creative Journey from Ada Lovelace to Foundation Models
3 Large Language Models and Boden’s Three Criteria
4 Easy and Hard Problems in Machine Creativity
5 Practical Implications
6 Conclusion
References


Artificial intelligence to advance Earth observation: : A review of models, recent trends, and pathways forward / 2305.08413 / ISBN:https://doi.org/10.48550/arXiv.2305.08413 / Published by ArXiv / on (web) Publishing site
Part III Communicating - Machine-user interaction, trustworthiness & ethics
6 User-centric Earth observation
7 Earth observation and society: the growing relevance of ethics


LLM generated responses to mitigate the impact of hate speech / 2311.16905 / ISBN:https://doi.org/10.48550/arXiv.2311.16905 / Published by ArXiv / on (web) Publishing site
Abstract
2 Related Work


Data-Centric Foundation Models in Computational Healthcare: A Survey / 2401.02458 / ISBN:https://doi.org/10.48550/arXiv.2401.02458 / Published by ArXiv / on (web) Publishing site
2 Foundation Models
3 Foundation Models in Healthcare
4 Multi-Modal Data Fusion
5 Data Quantity
6 Data Annotation
8 Performance Evaluation
9 Challenges and Opportunities
References


Ethical Artificial Intelligence Principles and Guidelines for the Governance and Utilization of Highly Advanced Large Language Models / 2401.10745 / ISBN:https://doi.org/10.48550/arXiv.2401.10745 / Published by ArXiv / on (web) Publishing site
Abstract
Introduction
Background
Comprehending Advanced Large Language Models and its Capabilities
Advanced Large Language Models Governance Using AI Ethics
Considerations for Advanced Large Language Models and Policy-Making
Discussion
Conclusion


Recent Advances in Hate Speech Moderation: Multimodality and the Role of Large Models / 2401.16727 / ISBN:https://doi.org/10.48550/arXiv.2401.16727 / Published by ArXiv / on (web) Publishing site
Abstract
1 Introduction
2 Hate Speech
3 Methodology
5 Future Directions
6 Conclusion


Integrating Generative AI in Hackathons: Opportunities, Challenges, and Educational Implications / 2401.17434 / ISBN:https://doi.org/10.48550/arXiv.2401.17434 / Published by ArXiv / on (web) Publishing site
1. Introduction


Large language models as linguistic simulators and cognitive models in human research / 2402.04470 / ISBN:https://doi.org/10.48550/arXiv.2402.04470 / Published by ArXiv / on (web) Publishing site
Abstract
Language models as human participants
Six fallacies that misinterpret language models
Using language models to simulate roles and model cognitive processes
Concluding remarks


Navigating LLM Ethics: Advancements, Challenges, and Future Directions / 2406.18841 / ISBN:https://doi.org/10.48550/arXiv.2406.18841 / Published by ArXiv / on (web) Publishing site
Abstract
I. Introduction
II. Conceptualization and frameworks
IV. Findings and Resultant Themes
V. Discussion
VI. Conclusion and Future directions
References


How Mature is Requirements Engineering for AI-based Systems? A Systematic Mapping Study on Practices, Challenges, and Future Research Directions / 2409.07192 / ISBN:https://doi.org/10.48550/arXiv.2409.07192 / Published by ArXiv / on (web) Publishing site
2 Related Work
4 Results


ValueCompass: A Framework of Fundamental Values for Human-AI Alignment / 2409.09586 / ISBN:https://doi.org/10.48550/arXiv.2409.09586 / Published by ArXiv / on (web) Publishing site
4 Operationalizing ValueCompass: Methods to Measure Value Alignment of Humans and AI
5 Findings with ValueCompass: The Status Quo of Human-AI Value Alignment
References


Generative AI Carries Non-Democratic Biases and Stereotypes: Representation of Women, Black Individuals, Age Groups, and People with Disability in AI-Generated Images across Occupations / 2409.13869 / ISBN:https://doi.org/10.48550/arXiv.2409.13869 / Published by ArXiv / on (web) Publishing site
Mutual Impacts: Technology and Democracy
Data and Results


GenAI Advertising: Risks of Personalizing Ads with LLMs / 2409.15436 / ISBN:https://doi.org/10.48550/arXiv.2409.15436 / Published by ArXiv / on (web) Publishing site
1 Introduction
2 Background and Related Work
3 Chatbot Ad Engine Design
4 Effects of Ad Injection on LLM Performance
5 User Study Methodology
6 User Study Results
7 Discussion
8 Conclusion
References
A Appendix


XTRUST: On the Multilingual Trustworthiness of Large Language Models / 2409.15762 / ISBN:https://doi.org/10.48550/arXiv.2409.15762 / Published by ArXiv / on (web) Publishing site
Abstract
1 Introduction
2 Related Works
3 XTRUST Construction
4 Experiments
5 Conclusion
Limitations
References
Appendices


Artificial Human Intelligence: The role of Humans in the Development of Next Generation AI / 2409.16001 / ISBN:https://doi.org/10.48550/arXiv.2409.16001 / Published by ArXiv / on (web) Publishing site
III. Path Leading to AHI
IV. Human-Level AI and Challenges/Perspectives
References


Ethical and Scalable Automation: A Governance and Compliance Framework for Business Applications / 2409.16872 / ISBN:https://doi.org/10.48550/arXiv.2409.16872 / Published by ArXiv / on (web) Publishing site
Abstract
2. Literature Review
3. Methodology
4. Framework Development


Decoding Large-Language Models: A Systematic Overview of Socio-Technical Impacts, Constraints, and Emerging Questions / 2409.16974 / ISBN:https://doi.org/10.48550/arXiv.2409.16974 / Published by ArXiv / on (web) Publishing site
Abstract
1 Introduction
3 Systematic Review Methodology
5 Aims & Objectives (RQ1)
6 Methodologies & Capabilities (RQ2)
7 Limitations & Considerations (RQ3)
8 Discussion
9 Conclusion


Social Media Bot Policies: Evaluating Passive and Active Enforcement / 2409.18931 / ISBN:https://doi.org/10.48550/arXiv.2409.18931 / Published by ArXiv / on (web) Publishing site
References


Safety challenges of AI in medicine / 2409.18968 / ISBN:https://doi.org/10.48550/arXiv.2409.18968 / Published by ArXiv / on (web) Publishing site
Abstract
1 Introduction
4 AI safety issues related to large language models in medicine
5 Conclusion


Responsible AI in Open Ecosystems: Reconciling Innovation with Risk Assessment and Disclosure / 2409.19104 / ISBN:https://doi.org/10.48550/arXiv.2409.19104 / Published by ArXiv / on (web) Publishing site
References
B Service-ready Features and Identifiers


DailyDilemmas: Revealing Value Preferences of LLMs with Quandaries of Daily Life / 2410.02683 / ISBN:https://doi.org/10.48550/arXiv.2410.02683 / Published by ArXiv / on (web) Publishing site
Abstract
1 Introduction
3 An Analysis of Synthetically Generated Dilemma Vignettes and Human Values in Daily Dilemmas
4 Unveiling LLM's Value Preferences Through Action Choices in Everyday Dilemmas
5 Examining LLM's Adherence to Design Principles and the Steerability of Value Preferences
6 Related Work
7 Conclusion


AI-Press: A Multi-Agent News Generating and Feedback Simulation System Powered by Large Language Models / 2410.07561 / ISBN:https://doi.org/10.48550/arXiv.2410.07561 / Published by ArXiv / on (web) Publishing site
Abstract
1 Introduction
2 Related Works
5 Results
Appendices


DailyDilemmas: Revealing Value Preferences of LLMs with Quandaries of Daily Life / 2410.02683 / ISBN:https://doi.org/10.48550/arXiv.2410.02683 / Published by ArXiv / on (web) Publishing site
References


From human-centered to social-centered artificial intelligence: Assessing ChatGPT's impact through disruptive events / 2306.00227 / ISBN:https://doi.org/10.48550/arXiv.2306.00227 / Published by ArXiv / on (web) Publishing site
Abstract
Introduction
The multiple levels of AI impact
The emerging social impacts of ChatGPT
Discussion
Conclusion


The Design Space of in-IDE Human-AI Experience / 2410.08676 / ISBN:https://doi.org/10.48550/arXiv.2410.08676 / Published by ArXiv / on (web) Publishing site
IV. Results


Learning Human-like Representations to Enable Learning Human Values / 2312.14106 / ISBN:https://doi.org/10.48550/arXiv.2312.14106 / Published by ArXiv / on (web) Publishing site
A. Appendix


Navigating the Cultural Kaleidoscope: A Hitchhiker's Guide to Sensitivity in Large Language Models / 2410.12880 / ISBN:https://doi.org/10.48550/arXiv.2410.12880 / Published by ArXiv / on (web) Publishing site
Abstract
1 Introduction
2 Previous studies
3 Overview of cultural safety
6 Main results on evaluation set
7 Cultural safeguarding
9 Conclusion


Is ETHICS about ethics- Evaluating the ETHICS benchmark / 2410.13009 / ISBN:https://doi.org/10.48550/arXiv.2410.13009 / Published by ArXiv / on (web) Publishing site
References


How Do AI Companies Fine-Tune Policy? Examining Regulatory Capture in AI Governance / 2410.13042 / ISBN:https://doi.org/10.48550/arXiv.2410.13042 / Published by ArXiv / on (web) Publishing site
References


Data Defenses Against Large Language Models / 2410.13138 / ISBN:https://doi.org/10.48550/arXiv.2410.13138 / Published by ArXiv / on (web) Publishing site
Abstract
1 Introduction
2 Ethics of Resisting LLM Inference
3 Threat Model
4 LLM Adversarial Attacks as LLM Inference Data Defenses
5 Experiments
6 Discussion
7 Conclusion and Limitations
References


Do LLMs Have Political Correctness? Analyzing Ethical Biases and Jailbreak Vulnerabilities in AI Systems / 2410.13334 / ISBN:https://doi.org/10.48550/arXiv.2410.13334 / Published by ArXiv / on (web) Publishing site
Abstract
1 Introduction
2 Background and Related Works
3 Methodology PCJAILBREAK
4 Experiment
5 Conclusion
References


A Simulation System Towards Solving Societal-Scale Manipulation / 2410.13915 / ISBN:https://doi.org/10.48550/arXiv.2410.13915 / Published by ArXiv / on (web) Publishing site
1 Introduction
2 Related Work
3 Methodology
5 Future Work and Discussion
References


Jailbreaking and Mitigation of Vulnerabilities in Large Language Models / 2410.15236 / ISBN:https://doi.org/10.48550/arXiv.2410.15236 / Published by ArXiv / on (web) Publishing site
Abstract
I. Introduction
II. Background and Concepts
III. Jailbreak Attack Methods and Techniques
IV. Defense Mechanisms Against Jailbreak Attacks
V. Evaluation and Benchmarking
VI. Research Gaps and Future Directions
VII. Conclusion
References


Vernacularizing Taxonomies of Harm is Essential for Operationalizing Holistic AI Safety / 2410.16562 / ISBN:https://doi.org/10.48550/arXiv.2410.16562 / Published by ArXiv / on (web) Publishing site
Introduction


Towards Automated Penetration Testing: Introducing LLM Benchmark, Analysis, and Improvements / 2410.17141 / ISBN:https://doi.org/10.48550/arXiv.2410.17141 / Published by ArXiv / on (web) Publishing site
Abstract
1 Introduction
2 Background
3 Benchmark
4 Evaluation
5 Discussion
6 Conclusion and Future work
7 Potential Risks
8 Limitations
References


Demystifying Large Language Models for Medicine: A Primer / 2410.18856 / ISBN:https://doi.org/10.48550/arXiv.2410.18856 / Published by ArXiv / on (web) Publishing site
Abstract
Introduction
Task Formulation
Large Language Model Selection
Prompt engineering
Fine-tuning
Deployment considerations
Conclusions
Glossary
References


The Cat and Mouse Game: The Ongoing Arms Race Between Diffusion Models and Detection Methods / 2410.18866 / ISBN:https://doi.org/10.48550/arXiv.2410.18866 / Published by ArXiv / on (web) Publishing site
References


TRIAGE: Ethical Benchmarking of AI Models Through Mass Casualty Simulations / 2410.18991 / ISBN:https://doi.org/10.48550/arXiv.2410.18991 / Published by ArXiv / on (web) Publishing site
Abstract
1 Introduction
4 Discussion
5 Conclusion
References
Appendices


The Dark Side of AI Companionship: A Taxonomy of Harmful Algorithmic Behaviors in Human-AI Relationships / 2410.20130 / ISBN:https://doi.org/10.48550/arXiv.2410.20130 / Published by ArXiv / on (web) Publishing site
1 Introduction
2 Related Work
3 Methodology
4 Results
5 Discussion


The Trap of Presumed Equivalence: Artificial General Intelligence Should Not Be Assessed on the Scale of Human Intelligence / 2410.21296 / ISBN:https://doi.org/10.48550/arXiv.2410.21296 / Published by ArXiv / on (web) Publishing site
3 Assessing the Current State of Self-Awareness in Artificial Intelligent Systems


Standardization Trends on Safety and Trustworthiness Technology for Advanced AI / 2410.22151 / ISBN:https://doi.org/10.48550/arXiv.2410.22151 / Published by ArXiv / on (web) Publishing site
1 Introduction
3 Trends in advanced AI safety and trustworthiness standardization
4 Conclusion
References


Democratizing Reward Design for Personal and Representative Value-Alignment / 2410.22203 / ISBN:https://doi.org/10.48550/arXiv.2410.22203 / Published by ArXiv / on (web) Publishing site
2 Background and Related Work
3 Interactive-Reflective Dialogue Alignment (IRDA) System
7 Discussion


Moral Agency in Silico: Exploring Free Will in Large Language Models / 2410.23310 / ISBN:https://doi.org/10.48550/arXiv.2410.23310 / Published by ArXiv / on (web) Publishing site
Abstract
Introduction
Defining Key Concepts
Theoretical Framework
Methodology
Discussion
Conclusion


Using Large Language Models for a standard assessment mapping for sustainable communities / 2411.00208 / ISBN:https://doi.org/10.48550/arXiv.2411.00208 / Published by ArXiv / on (web) Publishing site
Abstract
1 Introduction
2 Literature Review
3 Methodology
5 Discussion
6 Future Directions
7 Conclusion
References


Where Assessment Validation and Responsible AI Meet / 2411.02577 / ISBN:https://doi.org/10.48550/arXiv.2411.02577 / Published by ArXiv / on (web) Publishing site
Classical Assessment Validation Theory and Responsible AI
Integrating Classical Validation Theory and Responsible AI


Examining Human-AI Collaboration for Co-Writing Constructive Comments Online / 2411.03295 / ISBN:https://doi.org/10.48550/arXiv.2411.03295 / Published by ArXiv / on (web) Publishing site
Abstract
1 Introduction
2 Related Work
3 Methods
4 Findings
5 Discussion


Smoke Screens and Scapegoats: The Reality of General Data Protection Regulation Compliance -- Privacy and Ethics in the Case of Replika AI / 2411.04490 / ISBN:https://doi.org/10.48550/arXiv.2411.04490 / Published by ArXiv / on (web) Publishing site
2. AI chatbots in privacy and ethics research


"I Always Felt that Something Was Wrong.": Understanding Compliance Risks and Mitigation Strategies when Professionals Use Large Language Models / 2411.04576 / ISBN:https://doi.org/10.48550/arXiv.2411.04576 / Published by ArXiv / on (web) Publishing site
Abstract
1 Introduction
2 Background and Related Work
3 Method: Semi-structured Interviews
4 Findings
5 Discussion
6 Conclusion
Appendices
References


Navigating the Cultural Kaleidoscope: A Hitchhiker's Guide to Sensitivity in Large Language Models / 2410.12880 / ISBN:https://doi.org/10.48550/arXiv.2410.12880 / Published by ArXiv / on (web) Publishing site
Appendices


CogErgLLM: Exploring Large Language Model Systems Design Perspective Using Cognitive Ergonomics / 2407.02885 / ISBN:https://doi.org/10.48550/arXiv.2407.02885 / Published by ArXiv / on (web) Publishing site
References


Improving governance outcomes through AI documentation: Bridging theory and practice / 2409.08960 / ISBN:https://doi.org/10.48550/arXiv.2409.08960 / Published by ArXiv / on (web) Publishing site
6 Directions for future research



A Survey on Medical Large Language Models: Technology, Application, Trustworthiness, and Future Directions / 2406.03712 / ISBN:https://doi.org/10.48550/arXiv.2406.03712 / Published by ArXiv / on (web) Publishing site
Abstract
I. Introduction
II. Background and Technology
III. From General to Medical-Specific LLMs
IV. Improving Algorithms for Med-LLMs
V. Applying Medical LLMs
VI. Trustworthiness and Safety
VII. Future Directions
VIII. Conclusions
References


The doctor will polygraph you now: ethical concerns with AI for fact-checking patients / 2408.07896 / ISBN:https://doi.org/10.48550/arXiv.2408.07896 / Published by ArXiv / on (web) Publishing site
2. Clinical, Technical, and Ethical Concerns
3. Methods
4. Results
5. Discussion
6. Conclusion
References:


Large-scale moral machine experiment on large language models / 2411.06790 / ISBN:https://doi.org/10.48550/arXiv.2411.06790 / Published by ArXiv / on (web) Publishing site
Abstract
Introduction
Materials and methods
Results
Discussion
Supporting information


Persuasion with Large Language Models: a Survey / 2411.06837 / ISBN:https://doi.org/10.48550/arXiv.2411.06837 / Published by ArXiv / on (web) Publishing site
Abstract
1 Introduction
2 Application Domains
3 Factors Influencing Persuasiveness
5 Ethical Considerations
References


Collaborative Participatory Research with LLM Agents in South Asia: An Empirically-Grounded Methodological Initiative and Agenda from Field Evidence in Sri Lanka / 2411.08294 / ISBN:https://doi.org/10.48550/arXiv.2411.08294 / Published by ArXiv / on (web) Publishing site
1 Introduction
2 Why South Asia Needs This Now
3 Proposed LLM4Participatory Research Framework
4 Field Work and Implementation Insights
References


Human-Centered AI Transformation: Exploring Behavioral Dynamics in Software Engineering / 2411.08693 / ISBN:https://doi.org/10.48550/arXiv.2411.08693 / Published by ArXiv / on (web) Publishing site
IV. Results


Programming with AI: Evaluating ChatGPT, Gemini, AlphaCode, and GitHub Copilot for Programmers / 2411.09224 / ISBN:https://doi.org/10.48550/arXiv.2411.09224 / Published by ArXiv / on (web) Publishing site
Abstract
1 Introduction
2 Related Work
3 Transformer Architecture
5 Empirical Result
7 Response Accuracy
9 Fairness
References


Generative AI in Multimodal User Interfaces: Trends, Challenges, and Cross-Platform Adaptability / 2411.10234 / ISBN:https://doi.org/10.48550/arXiv.2411.10234 / Published by ArXiv / on (web) Publishing site
Abstract
I. Introduction
II. Problem Statement: the Interface Dilemma
III. History and Evolution of User Interfaces
IV. Current App Frameworks and AI Integration
V. Multimodal Interaction
VI. Limitations, Challenges, and Future Directions for AI-Driven Interfaces
VII. Metrics for Evaluating AI-Driven Multimodal UIs
VIII. Conclusion
References


Bias in Large Language Models: Origin, Evaluation, and Mitigation / 2411.10915 / ISBN:https://doi.org/10.48550/arXiv.2411.10915 / Published by ArXiv / on (web) Publishing site
References
Abstract
1. Introduction
2. Intrinsic Bias
3. Extrinsic Bias
4. Bias Evaluation
5. Bias Mitigation
6. Ethical Concerns and Legal Challenges
7. Conclusion


Framework for developing and evaluating ethical collaboration between expert and machine / 2411.10983 / ISBN:https://doi.org/10.48550/arXiv.2411.10983 / Published by ArXiv / on (web) Publishing site
1. Introduction
2. Method


Chat Bankman-Fried: an Exploration of LLM Alignment in Finance / 2411.11853 / ISBN:https://doi.org/10.48550/arXiv.2411.11853 / Published by ArXiv / on (web) Publishing site
Abstract
1 Introduction
2 Related work
3 Experimental framework
4 Results
5 Conclusion
References
Appendices


GPT versus Humans: Uncovering Ethical Concerns in Conversational Generative AI-empowered Multi-Robot Systems / 2411.14009 / ISBN:https://doi.org/10.48550/arXiv.2411.14009 / Published by ArXiv / on (web) Publishing site
Abstract
1 Introduction
2 Background
3 Method
4 Results
5 Discussion
6 Conclusion
References


Good intentions, unintended consequences: exploring forecasting harms / 2411.16531 / ISBN:https://doi.org/10.48550/arXiv.2411.16531 / Published by ArXiv / on (web) Publishing site
2 Harms in forecasting


AI-Augmented Ethical Hacking: A Practical Examination of Manual Exploitation and Privilege Escalation in Linux Environments / 2411.17539 / ISBN:https://doi.org/10.48550/arXiv.2411.17539 / Published by ArXiv / on (web) Publishing site
7 Related Work
References


Examining Multimodal Gender and Content Bias in ChatGPT-4o / 2411.19140 / ISBN:https://doi.org/10.48550/arXiv.2411.19140 / Published by ArXiv / on (web) Publishing site
1. Introduction
2. Related Works
References
Author


Ethical Challenges and Evolving Strategies in the Integration of Artificial Intelligence into Clinical Practice / 2412.03576 / ISBN:https://doi.org/10.48550/arXiv.2412.03576 / Published by ArXiv / on (web) Publishing site
Introduction and Motivation


Exploring AI Text Generation, Retrieval-Augmented Generation, and Detection Technologies: a Comprehensive Overview / 2412.03933 / ISBN:https://doi.org/10.48550/arXiv.2412.03933 / Published by ArXiv / on (web) Publishing site
Abstract
I. Introduction
References


Large Language Models in Politics and Democracy: A Comprehensive Survey / 2412.04498 / ISBN:https://doi.org/10.48550/arXiv.2412.04498 / Published by ArXiv / on (web) Publishing site
Abstract
1. Introduction
2. Understanding Large Language Models
3. LLM Applications in Politics
4. Future Prospects
5. Conclusion
References


From Principles to Practice: A Deep Dive into AI Ethics and Regulations / 2412.04683 / ISBN:https://doi.org/10.48550/arXiv.2412.04683 / Published by ArXiv / on (web) Publishing site
IV Integrative AI Ethics


Technology as uncharted territory: Contextual integrity and the notion of AI as new ethical ground / 2412.05130 / ISBN:https://doi.org/10.48550/arXiv.2412.05130 / Published by ArXiv / on (web) Publishing site
IV Integrative AI Ethics


Can OpenAI o1 outperform humans in higher-order cognitive thinking? / 2412.05753 / ISBN:https://doi.org/10.48550/arXiv.2412.05753 / Published by ArXiv / on (web) Publishing site
1 Introduction
References


Political-LLM: Large Language Models in Political Science / 2412.06864 / ISBN:https://doi.org/10.48550/arXiv.2412.06864 / Published by ArXiv / on (web) Publishing site
Abstract
1 Introduction
2 Preliminaries
3 Taxonomy on LLM for Political Science
4 Classical Political Science Functions and Modern Transformations
5 Technical Foundations for LLM Applications in Political Science
6 Future Directions & Challenges
7 Conclusion
References


Digital Democracy in the Age of Artificial Intelligence / 2412.07791 / ISBN:https://doi.org/10.48550/arXiv.2412.07791 / Published by ArXiv / on (web) Publishing site
4. Representation: Digital and AI Technologies in Modern Electoral Processes


Towards Foundation-model-based Multiagent System to Accelerate AI for Social Impact / 2412.07880 / ISBN:https://doi.org/10.48550/arXiv.2412.07880 / Published by ArXiv / on (web) Publishing site
2 Preliminaries
3 Formulating the Problem
4 Designing Solution Methods
5 Testing and Deployment
References


Bias in Large Language Models: Origin, Evaluation, and Mitigation / 2411.10915 / ISBN:https://doi.org/10.48550/arXiv.2411.10915 / Published by ArXiv / on (web) Publishing site
Appendices


CERN for AI: A Theoretical Framework for Autonomous Simulation-Based Artificial Intelligence Testing and Alignment / 2312.09402 / ISBN:https://doi.org/10.48550/arXiv.2312.09402 / Published by ArXiv / on (web) Publishing site
Abstract
Introduction
Methods
Establishing a framework for interactions in an autonomous digital city
Creating elements of an autonomous digital city
Discussion
Conclusion


Reviewing Intelligent Cinematography: AI research for camera-based video production / 2405.05039 / ISBN:https://doi.org/10.48550/arXiv.2405.05039 / Published by ArXiv / on (web) Publishing site
1 Introduction
4 Concluding Remarks


Shaping AI's Impact on Billions of Lives / 2412.02730 / ISBN:https://doi.org/10.48550/arXiv.2412.02730 / Published by ArXiv / on (web) Publishing site
I. Putting Pragmatic AI in Context
Appendices


Research Integrity and GenAI: A Systematic Analysis of Ethical Challenges Across Research Phases / 2412.10134 / ISBN:https://doi.org/10.48550/arXiv.2412.10134 / Published by ArXiv / on (web) Publishing site
Research Phases and AI Tools
Discussion
Bibliography


On Large Language Models in Mission-Critical IT Governance: Are We Ready Yet? / 2412.11698 / ISBN:https://doi.org/10.48550/arXiv.2412.11698 / Published by ArXiv / on (web) Publishing site
Abstract
1. Introduction
II. Study Design
III. Results
IV. Discussions
V. Threats to Validity
VI. Related Works
VII. Conclusions
References


Understanding and Evaluating Trust in Generative AI and Large Language Models for Spreadsheets / 2412.14062 / ISBN:https://doi.org/10.48550/arXiv.2412.14062 / Published by ArXiv / on (web) Publishing site
Abstract
1.0 Introduction
2.0 Trust in Automation
3.0 Conclusions and Areas for Future Research
References


Towards Friendly AI: A Comprehensive Review and New Perspectives on Human-AI Alignment / 2412.15114 / ISBN:https://doi.org/10.48550/arXiv.2412.15114 / Published by ArXiv / on (web) Publishing site
I. Introduction
III. Theoretical Perspectives
IV. Applications


Large Language Model Safety: A Holistic Survey / 2412.17686 / ISBN:https://doi.org/10.48550/arXiv.2412.17686 / Published by ArXiv / on (web) Publishing site
Abstract
1 Introduction
2 Taxonomy
3 Value Misalignment
4 Robustness to Attack
5 Misuse
6 Autonomous AI Risks
7 Agent Safety
8 Interpretability for LLM Safety
9 Technology Roadmaps / Strategies to LLM Safety in Practice
10 Governance
11 Challenges and Future Directions
12 Conclusion
References


Generative AI and LLMs in Industry: A text-mining Analysis and Critical Evaluation of Guidelines and Policy Statements Across Fourteen Industrial Sectors / 2501.00957 / ISBN:https://doi.org/10.48550/arXiv.2501.00957 / Published by ArXiv / on (web) Publishing site
Abstract
Introduction
II. Methodology
V. Discussion and Synthesis
VI. Concluding Remarks and Future Directions


INFELM: In-depth Fairness Evaluation of Large Text-To-Image Models / 2501.01973 / ISBN:https://doi.org/10.48550/arXiv.2501.01973 / Published by ArXiv / on (web) Publishing site
Abstract
1 Introduction
2 Related Work
References


Curious, Critical Thinker, Empathetic, and Ethically Responsible: Essential Soft Skills for Data Scientists in Software Engineering / 2501.02088 / ISBN:https://doi.org/10.48550/arXiv.2501.02088 / Published by ArXiv / on (web) Publishing site
III. Method


Hybrid Approaches for Moral Value Alignment in AI Agents: a Manifesto / 2312.01818 / ISBN:https://doi.org/10.48550/arXiv.2312.01818 / Published by ArXiv / on (web) Publishing site
Abstract
2. Learning Morality in Machines
3. Designing AI Agents based on Moral Principles
4. Evaluating Moral Learning Agents
5. Outlook & Implications
6. Conclusion


Towards A Litmus Test for Common Sense / 2501.09913 / ISBN:https://doi.org/10.48550/arXiv.2501.09913 / Published by ArXiv / on (web) Publishing site
Abstract
7 Mathematical Formulation for LLMs and AI
10 Conclusion


Bias in Decision-Making for AI's Ethical Dilemmas: A Comparative Study of ChatGPT and Claude / 2501.10484 / ISBN:https://doi.org/10.48550/arXiv.2501.10484 / Published by ArXiv / on (web) Publishing site
Abstract
Introduction
Related Works
Methodology
Results
Discussion and Conclusion


Harnessing the Potential of Large Language Models in Modern Marketing Management: Applications, Future Directions, and Strategic Recommendations / 2501.10685 / ISBN:https://doi.org/10.48550/arXiv.2501.10685 / Published by ArXiv / on (web) Publishing site
Abstract
1. Introduction
2- Technological Foundations
3- Content Creation and Personalization
4- Research on Market and Consumer Insights
5- Customer Communication and Assistance
6- Campaign Optimization and Management
7- Social Media and Community Engagement
8- Ethical Considerations in Marketing AI
9- Challenges and Opportunities
10- Case Studies and Real-world Applications
11- Discussion
12- Conclusion
References


Development of Application-Specific Large Language Models to Facilitate Research Ethics Review / 2501.10741 / ISBN:https://doi.org/10.48550/arXiv.2501.10741 / Published by ArXiv / on (web) Publishing site
Abstract
I. Introduction
II. Problems with ethical review of human subjects research
III. Generative AI for IRB review
IV. Application-Specific IRB LLMs
V. Discussion: Potential Benefits, Risks, and Replies
VI. Conclusion
References


Human services organizations and the responsible integration of AI: Considering ethics and contextualizing risk(s) / 2501.11705 / ISBN:https://doi.org/10.48550/arXiv.2501.11705 / Published by ArXiv / on (web) Publishing site
References


Deploying Privacy Guardrails for LLMs: A Comparative Analysis of Real-World Applications / 2501.12456 / ISBN:https://doi.org/10.48550/arXiv.2501.12456 / Published by ArXiv / on (web) Publishing site
Abstract
Introduction
State of the Art
Deployment 1: Data and Model Factory
Conclusions and Future Work


Responsible Generative AI Use by Product Managers: Recoupling Ethical Principles and Practices / 2501.16531 / ISBN:https://doi.org/10.48550/arXiv.2501.16531 / Published by ArXiv / on (web) Publishing site
5. Findings


Examining the Expanding Role of Synthetic Data Throughout the AI Development Pipeline / 2501.18493 / ISBN:https://doi.org/10.48550/arXiv.2501.18493 / Published by ArXiv / on (web) Publishing site
Appendices


Towards Safe AI Clinicians: A Comprehensive Study on Large Language Model Jailbreaking in Healthcare / 2501.18632 / ISBN:https://doi.org/10.48550/arXiv.2501.18632 / Published by ArXiv / on (web) Publishing site
Abstract
Introduction
Background and Related Work
Jailbreak Evaluation Method
Model Guardrail Enhancement
Limitations and Future Work
Conclusion
References


Constructing AI ethics narratives based on real-world data: Human-AI collaboration in data-driven visual storytelling / 2502.00637 / ISBN:https://doi.org/10.48550/arXiv.2502.00637 / Published by ArXiv / on (web) Publishing site
2 Related Work
3 Methodology


FairT2I: Mitigating Social Bias in Text-to-Image Generation via Large Language Model-Assisted Detection and Attribute Rebalancing / 2502.03826 / ISBN:https://doi.org/10.48550/arXiv.2502.03826 / Published by ArXiv / on (web) Publishing site
1. Introduction
4. Methodologies
7. Discussions and Conclusions


Open Foundation Models in Healthcare: Challenges, Paradoxes, and Opportunities with GenAI Driven Personalized Prescription / 2502.04356 / ISBN:https://doi.org/10.48550/arXiv.2502.04356 / Published by ArXiv / on (web) Publishing site
Abstract
I. Introduction
II. Background
III. State-of-the-Art in Open Healthcare LLMs and AIFMs
IV. Leveraging Open LLMs for Prescription: A Case Study
V. Conclusions
References


Safety at Scale: A Comprehensive Survey of Large Model Safety / 2502.05206 / ISBN:https://doi.org/10.48550/arXiv.2502.05206 / Published by ArXiv / on (web) Publishing site
Abstract
1 Introduction
3 Large Language Model Safety
5 Vision-Language Model Safety
6 Diffusion Model Safety
7 Agent Safety
8 Open Challenges
9 Conclusion
References


The Odyssey of the Fittest: Can Agents Survive and Still Be Good? / 2502.05442 / ISBN:https://doi.org/10.48550/arXiv.2502.05442 / Published by ArXiv / on (web) Publishing site
Related Work
Method
Discussion


A Conceptual Exploration of Generative AI-Induced Cognitive Dissonance and its Emergence in University-Level Academic Writing / 2502.05698 / ISBN:https://doi.org/10.48550/arXiv.2502.05698 / Published by ArXiv / on (web) Publishing site
References:


Position: We Need An Adaptive Interpretation of Helpful, Honest, and Harmless Principles / 2502.06059 / ISBN:https://doi.org/10.48550/arXiv.2502.06059 / Published by ArXiv / on (web) Publishing site
1 Introduction
2 HHH Principle
3 Ambiguity and Conflicts in HHH
5 Trade-off or Synergy? Relationship Between Different Dimensions
References


Comprehensive Framework for Evaluating Conversational AI Chatbots / 2502.06105 / ISBN:https://doi.org/10.48550/arXiv.2502.06105 / Published by ArXiv / on (web) Publishing site
References


Integrating Generative Artificial Intelligence in ADRD: A Framework for Streamlining Diagnosis and Care in Neurodegenerative Diseases / 2502.06842 / ISBN:https://doi.org/10.48550/arXiv.2502.06842 / Published by ArXiv / on (web) Publishing site
Abstract
Introduction
High Quality Data Collection
Conclusion
References


Fairness in Multi-Agent AI: A Unified Framework for Ethical and Equitable Autonomous Systems / 2502.07254 / ISBN:https://doi.org/10.48550/arXiv.2502.07254 / Published by ArXiv / on (web) Publishing site
References


DailyDilemmas: Revealing Value Preferences of LLMs with Quandaries of Daily Life / 2410.02683 / ISBN:https://doi.org/10.48550/arXiv.2410.02683 / Published by ArXiv / on (web) Publishing site
Appendices


From large language models to multimodal AI: A scoping review on the potential of generative AI in medicine / 2502.09242 / ISBN:https://doi.org/10.48550/arXiv.2502.09242 / Published by ArXiv / on (web) Publishing site
1 Introduction
2 Methods
4 Language models in medicine
5 Multimodal language models in medicine
7 Discussion
References


Relational Norms for Human-AI Cooperation / 2502.12102 / ISBN:https://doi.org/10.48550/arXiv.2502.12102 / Published by ArXiv / on (web) Publishing site
Abstract
Introduction
References


Multi-Agent Risks from Advanced AI / 2502.14143 / ISBN:https://doi.org/10.48550/arXiv.2502.14143 / Published by ArXiv / on (web) Publishing site
2 Failure Modes
3 Risk Factors
Appendices
References


On the Trustworthiness of Generative Foundation Models: Guideline, Assessment, and Perspective / 2502.14296 / ISBN:https://doi.org/10.48550/arXiv.2502.14296 / Published by ArXiv / on (web) Publishing site
1 Introduction
2 Background
3 Guidelines of Trustworthy Generative Foundation Models
4 Designing TrustGen, a Dynamic Benchmark Platform for Evaluating the Trustworthiness of GenFMs
5 Benchmarking Text-to-Image Models
6 Benchmarking Large Language Models
7 Benchmarking Vision-Language Models
8 Other Generative Models
9 Trustworthiness in Downstream Applications
10 Further Discussion
11 Conclusion
References


Fair Foundation Models for Medical Image Analysis: Challenges and Perspectives / 2502.16841 / ISBN:https://doi.org/10.48550/arXiv.2502.16841 / Published by ArXiv / on (web) Publishing site
References


Why do we do this?: Moral Stress and the Affective Experience of Ethics in Practice / 2502.18395 / ISBN:https://doi.org/10.48550/arXiv.2502.18395 / Published by ArXiv / on (web) Publishing site
5 Findings


Comprehensive Analysis of Transparency and Accessibility of ChatGPT, DeepSeek, And other SoTA Large Language Models / 2502.18505 / ISBN:https://doi.org/10.48550/arXiv.2502.18505 / Published by ArXiv / on (web) Publishing site
Abstract
1. Introduction
2. Methodology
3. Results
4. Discussion
5. Conclusion
References


Developmental Support Approach to AI's Autonomous Growth: Toward the Realization of a Mutually Beneficial Stage Through Experiential Learning / 2502.19798 / ISBN:https://doi.org/10.48550/arXiv.2502.19798 / Published by ArXiv / on (web) Publishing site
Abstract
Methods for AI Development Support
Measuring Vertical-Axis Developmental Stages in AI
Method of Experiential Learning in LLMs


Personas Evolved: Designing Ethical LLM-Based Conversational Agent Personalities / 2502.20513 / ISBN:https://doi.org/10.48550/arXiv.2502.20513 / Published by ArXiv / on (web) Publishing site
Abstract
1 Theme and Goals


An LLM-based Delphi Study to Predict GenAI Evolution / 2502.21092 / ISBN:https://doi.org/10.48550/arXiv.2502.21092 / Published by ArXiv / on (web) Publishing site
1 Introduction
2 Methods
4 Discussion
5 Conclusions
6 Acknowledgement


Digital Doppelgangers: Ethical and Societal Implications of Pre-Mortem AI Clones / 2502.21248 / ISBN:https://doi.org/10.48550/arXiv.2502.21248 / Published by ArXiv / on (web) Publishing site
2 Defining Pre-Mortem AI Clones and Generative Ghosts


Evaluating Large Language Models on the Spanish Medical Intern Resident (MIR) Examination 2024/2025: A Comparative Analysis of Clinical Reasoning and Knowledge Application / 2503.00025 / ISBN:https://doi.org/10.48550/arXiv.2503.00025 / Published by ArXiv / on (web) Publishing site
Abstract
Introduction
2. Theoretical Framework
3. Methodology
4. Analysis and Results
5. Conclusion


Can AI Model the Complexities of Human Moral Decision-Making? A Qualitative Study of Kidney Allocation Decisions / 2503.00940 / ISBN:https://doi.org/10.48550/arXiv.2503.00940 / Published by ArXiv / on (web) Publishing site
5 Discussion


Digital Dybbuks and Virtual Golems: AI, Memory, and the Ethics of Holocaust Testimony / 2503.01369 / ISBN:https://doi.org/10.48550/arXiv.2503.01369 / Published by ArXiv / on (web) Publishing site
The permissibility of digital duplicates in Holocaust remembrance and education


Vision Language Models in Medicine / 2503.01863 / ISBN:https://doi.org/10.48550/arXiv.2503.01863 / Published by ArXiv / on (web) Publishing site
III. Core Concepts of Visual Language Modeling
IV. VLM Benchmarking and Evaluations
References


Twenty Years of Personality Computing: Threats, Challenges and Future Directions / 2503.02082 / ISBN:https://doi.org/10.48550/arXiv.2503.02082 / Published by ArXiv / on (web) Publishing site
1 Introduction
2 Background, History and Resources
3 Personality Computing Systems
4 Discussion and Conclusion
References


AI Automatons: AI Systems Intended to Imitate Humans / 2503.02250 / ISBN:https://doi.org/10.48550/arXiv.2503.02250 / Published by ArXiv / on (web) Publishing site
2 Background & Related Work
References


Medical Hallucinations in Foundation Models and Their Impact on Healthcare / 2503.05777 / ISBN:https://doi.org/10.48550/arXiv.2503.05777 / Published by ArXiv / on (web) Publishing site
1 Introduction
2 LLM Hallucinations in Medicine
3 Causes of Hallucinations
4 Detection and Evaluation of Medical Hallucinations
5 Mitigation Strategies
6 Experiments on Medical Hallucination Benchmark
7 Annotations of Medical Hallucination with Clinical Case Records
8 Survey on AI/LLM Adoption and Medical Hallucinations Among Healthcare Professionals and Researchers
Bibliography
Appendices


Generative AI in Transportation Planning: A Survey / 2503.07158 / ISBN:https://doi.org/10.48550/arXiv.2503.07158 / Published by ArXiv / on (web) Publishing site
1 Introduction
2 Preliminaries
4 Classical Transportation Planning Functions and Modern Transformations
5 Technical Foundations for Generative AI Applications in Transportation Planning
References


On the Trustworthiness of Generative Foundation Models: Guideline, Assessment, and Perspective / 2502.14296 / ISBN:https://doi.org/10.48550/arXiv.2502.14296 / Published by ArXiv / on (web) Publishing site
Appendices


MinorBench: A hand-built benchmark for content-based risks for children / 2503.10242 / ISBN:https://doi.org/10.48550/arXiv.2503.10242 / Published by ArXiv / on (web) Publishing site
Abstract
1 Introduction
2 Literature Review
3 Case Study
4 Taxonomy
5 Methodology
6 Results
7 Discussion
References


LLMs in Disease Diagnosis: A Comparative Study of DeepSeek-R1 and O3 Mini Across Chronic Health Conditions / 2503.10486 / ISBN:https://doi.org/10.48550/arXiv.2503.10486 / Published by ArXiv / on (web) Publishing site
Abstract
1 Introduction
2 Related Work
3 Methodology
4 Results
5 Discussion
6 Conclusion
References
Appendices


DarkBench: Benchmarking Dark Patterns in Large Language Models / 2503.10728 / ISBN:https://doi.org/10.48550/arXiv.2503.10728 / Published by ArXiv / on (web) Publishing site
Abstract
1 Introduction
2 Methodology
4 Discussion
5 Conclusion
6 Acknowledgement
Appendices


Policy Frameworks for Transparent Chain-of-Thought Reasoning in Large Language Models / 2503.14521 / ISBN:https://doi.org/10.48550/arXiv.2503.14521 / Published by ArXiv / on (web) Publishing site
Abstract
1 Motivation
2 Transparency of CoT in Current LLMs
3 Arguments pro Transparent CoT
5 Policy Framework: Tiered-Access
6 Conclusion
References


Ethical Implications of AI in Data Collection: Balancing Innovation with Privacy / 2503.14539 / ISBN:https://doi.org/10.48550/arXiv.2503.14539 / Published by ArXiv / on (web) Publishing site
Introduction


A Peek Behind the Curtain: Using Step-Around Prompt Engineering to Identify Bias and Misinformation in GenAI Models / 2503.15205 / ISBN:https://doi.org/10.48550/arXiv.2503.15205 / Published by ArXiv / on (web) Publishing site
Risks of Malicious Use of Step-Around Prompting
References


Gender and content bias in Large Language Models: a case study on Google Gemini 2.0 Flash Experimental / 2503.16534 / ISBN:https://doi.org/10.48550/arXiv.2503.16534 / Published by ArXiv / on (web) Publishing site
1 Introduction
4 Discussion
References


Advancing Problem-Based Learning in Biomedical Engineering in the Era of Generative AI / 2503.16558 / ISBN:https://doi.org/10.48550/arXiv.2503.16558 / Published by ArXiv / on (web) Publishing site
Abstract
I. Introduction
II. Related Work
III. Case Study: PBL for Biomedical AI Education
IV. Challenges and Opportunities
References


HH4AI: A methodological Framework for AI Human Rights impact assessment under the EUAI ACT / 2503.18994 / ISBN:https://doi.org/10.48550/arXiv.2503.18994 / Published by ArXiv / on (web) Publishing site
5 Case Study: Automated Triage Service in Health Care


Generative AI and News Consumption: Design Fictions and Critical Analysis / 2503.20391 / ISBN:https://doi.org/10.48550/arXiv.2503.20391 / Published by ArXiv / on (web) Publishing site
2 Background and Related Work


BEATS: Bias Evaluation and Assessment Test Suite for Large Language Models / 2503.24310 / ISBN:https://doi.org/10.48550/arXiv.2503.24310 / Published by ArXiv / on (web) Publishing site
Abstract
1 Introduction
2 Proposed Framework - BEATS
3 Key Findings
4 Limitations
5 Conclusion
6 Path Forward - Future Research Directions
7 Appendix


Leveraging LLMs for User Stories in AI Systems: UStAI Dataset / 2504.00513 / ISBN:https://doi.org/10.48550/arXiv.2504.00513 / Published by ArXiv / on (web) Publishing site
Abstract
1 Introduction
2 Related Work
3 Study Design
4 Results
5 Discussion
6 Conclusion
References


Bridging the Gap: Integrating Ethics and Environmental Sustainability in AI Research and Practice / 2504.00797 / ISBN:https://doi.org/10.48550/arXiv.2504.00797 / Published by ArXiv / on (web) Publishing site
1 Introduction
4 Transversal Issues in AI Ethics and Sustainability
References


Who Owns the Output? Bridging Law and Technology in LLMs Attribution / 2504.01032 / ISBN:https://doi.org/10.48550/arXiv.2504.01032 / Published by ArXiv / on (web) Publishing site
Abstract
1 Introduction
3 From Legal Frameworks to Technological Solutions
5 Discussion
6 Conclusions and Future Work
7 Authors’ Contributions
References


Ethical AI on the Waitlist: Group Fairness Evaluation of LLM-Aided Organ Allocation / 2504.03716 / ISBN:https://doi.org/10.48550/arXiv.2504.03716 / Published by ArXiv / on (web) Publishing site
Abstract
1 Introduction
2 Methods
3 Results
4 Conclusion
5 Related Works
References


Language-Dependent Political Bias in AI: A Study of ChatGPT and Gemini / 2504.06436 / ISBN:https://doi.org/10.48550/arXiv.2504.06436 / Published by ArXiv / on (web) Publishing site
1. Introduction
2. Artificial Intelligence
3. Materials and Methods
5. Conclusion


Automating the Path: An R&D Agenda for Human-Centered AI and Visualization / 2504.07529 / ISBN:https://doi.org/10.48550/arXiv.2504.07529 / Published by ArXiv / on (web) Publishing site
Abstract
Introduction