If you need more than one keyword, edit the field and separate the keywords with an underscore (_).
The list of search keywords can be up to 50 characters long.
After modifying the keywords, press Enter within the field to confirm the new search key.
Tag: prompt
Bibliography items in which it occurs: 401
- The AI Index 2022 Annual Report / 2205.03468 / ISBN:https://doi.org/10.48550/arXiv.2205.03468 / Published by ArXiv / on (web) Publishing site
- Chapter 3 Technical AI Ethics
Appendix - A Framework for Ethical AI at the United Nations / 2104.12547 / ISBN:https://doi.org/10.48550/arXiv.2104.12547 / Published by ArXiv / on (web) Publishing site
- Introduction
- Worldwide AI Ethics: a review of 200 guidelines and recommendations for AI governance / 2206.11922 / ISBN:https://doi.org/10.48550/arXiv.2206.11922 / Published by ArXiv / on (web) Publishing site
- 4 Results
- ESR: Ethics and Society Review of Artificial Intelligence Research / 2106.11521 / ISBN:https://doi.org/10.48550/arXiv.2106.11521 / Published by ArXiv / on (web) Publishing site
- 3 The ESR Process
4 Deployment and Evaluation - GPT detectors are biased against non-native English writers / 2304.02819 / ISBN:https://doi.org/10.48550/arXiv.2304.02819 / Published by ArXiv / on (web) Publishing site
- Abstract
Introduction
Results
Discussion
Materials and Methods - Perceptions of the Fourth Industrial Revolution and Artificial Intelligence Impact on Society / 2308.02030 / ISBN:https://doi.org/10.48550/arXiv.2308.02030 / Published by ArXiv / on (web) Publishing site
- Introduction
- From Military to Healthcare: Adopting and Expanding Ethical Principles for Generative Artificial Intelligence / 2308.02448 / ISBN:https://doi.org/10.48550/arXiv.2308.02448 / Published by ArXiv / on (web) Publishing site
- What is Generative Artificial Intelligence?
GREAT PLEA Ethical Principles for Generative AI in Healthcare - Ethical Considerations and Policy Implications for Large Language Models: Guiding Responsible Development and Deployment / 2308.02678 / ISBN:https://doi.org/10.48550/arXiv.2308.02678 / Published by ArXiv / on (web) Publishing site
- Introduction
System-role
Perturbation
Generation-related
Conclusion - Dual Governance: The intersection of centralized regulation and crowdsourced safety mechanisms for Generative AI / 2308.04448 / ISBN:https://doi.org/10.48550/arXiv.2308.04448 / Published by ArXiv / on (web) Publishing site
- 2 Background
- Bad, mad, and cooked: Moral responsibility for civilian harms in human-AI military teams / 2211.06326 / ISBN:https://doi.org/10.48550/arXiv.2211.06326 / Published by ArXiv / on (web) Publishing site
- Abstract
Introduction
Human Factors - The Future of ChatGPT-enabled Labor Market: A Preliminary Study / 2304.09823 / ISBN:https://doi.org/10.48550/arXiv.2304.09823 / Published by ArXiv / on (web) Publishing site
- 2 Results
- A Survey of Safety and Trustworthiness of Large Language Models through the Lens of Verification and Validation / 2305.11391 / ISBN:https://doi.org/10.48550/arXiv.2305.11391 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Large Language Models
3 Vulnerabilities, Attack, and Limitations
5 Falsification and Evaluation
8 Regulations and Ethical Use
9 Discussions - Getting pwn'd by AI: Penetration Testing with Large Language Models / 2308.00121 / ISBN:https://doi.org/10.48550/arXiv.2308.00121 / Published by ArXiv / on (web) Publishing site
- 2 Background
3 LLM-based penetration testing
4 Discussion
5 A vision of AI-augmented pen-testing
6 Final ethical considerations - AIxArtist: A First-Person Tale of Interacting with Artificial Intelligence to Escape Creative Block / 2308.11424 / ISBN:https://doi.org/10.48550/arXiv.2308.11424 / Published by ArXiv / on (web) Publishing site
- Case study
Reflections - Exploring the Power of Creative AI Tools and Game-Based Methodologies for Interactive Web-Based Programming / 2308.11649 / ISBN:https://doi.org/10.48550/arXiv.2308.11649 / Published by ArXiv / on (web) Publishing site
- 3 Emergence of Creative AI Tools and Game-Based Methodologies
- Building Trust in Conversational AI: A Comprehensive Review and Solution Architecture for Explainable, Privacy-Aware Systems using LLMs and Knowledge Graph / 2308.13534 / ISBN:https://doi.org/10.48550/arXiv.2308.13534 / Published by ArXiv / on (web) Publishing site
- I. Introduction
II. Methods and training process of LLMs
VI. Solution architecture for privacy-aware and trustworthy conversational AI - Artificial Intelligence in Career Counseling: A Test Case with ResumAI / 2308.14301 / ISBN:https://doi.org/10.48550/arXiv.2308.14301 / Published by ArXiv / on (web) Publishing site
- 3 Methods
4 Results and discussion - Rethinking Machine Ethics -- Can LLMs Perform Moral Reasoning through the Lens of Moral Theories? / 2308.15399 / ISBN:https://doi.org/10.48550/arXiv.2308.15399 / Published by ArXiv / on (web) Publishing site
- 3 Theory and Method
4 Experiment
5 Conclusion - Ethical Framework for Harnessing the Power of AI in Healthcare and Beyond / 2309.00064 / ISBN:https://doi.org/10.48550/arXiv.2309.00064 / Published by ArXiv / on (web) Publishing site
- 4 Human-centric AI
- The Impact of Artificial Intelligence on the Evolution of Digital Education: A Comparative Study of OpenAI Text Generation Tools including ChatGPT, Bing Chat, Bard, and Ernie / 2309.02029 / ISBN:https://doi.org/10.48550/arXiv.2309.02029 / Published by ArXiv / on (web) Publishing site
- 1. Introduction
3. ChatGPT Training Process - Pathway to Future Symbiotic Creativity / 2209.02388 / ISBN:https://doi.org/10.48550/arXiv.2209.02388 / Published by ArXiv / on (web) Publishing site
- Part 3 - 2 Machine Artist Models
Part 3 - 3 Comparison with Generative Models
Part 3 - 4 Demonstration of the Proposed Framework
Acknowledgment - The Cambridge Law Corpus: A Corpus for Legal AI Research / 2309.12269 / ISBN:https://doi.org/10.48550/arXiv.2309.12269 / Published by ArXiv / on (web) Publishing site
- 4 Experiments
General References
F Evaluation of GPT Models - EALM: Introducing Multidimensional Ethical Alignment in Conversational Information Retrieval / 2310.00970 / ISBN:https://doi.org/10.48550/arXiv.2310.00970 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Related Work
5 Experiments
Appendix - Security Considerations in AI-Robotics: A Survey of Current Methods, Challenges, and Opportunities / 2310.08565 / ISBN:https://doi.org/10.48550/arXiv.2310.08565 / Published by ArXiv / on (web) Publishing site
- V. Ethical & Legal Concerns
- Deepfakes, Phrenology, Surveillance, and More! A Taxonomy of AI Privacy Risks / 2310.07879 / ISBN:https://doi.org/10.48550/arXiv.2310.07879 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
- ClausewitzGPT Framework: A New Frontier in Theoretical Large Language Model Enhanced Information Operations / 2310.07099 / ISBN:https://doi.org/10.48550/arXiv.2310.07099 / Published by ArXiv / on (web) Publishing site
- Looking Forward: ClausewitzGPT
- The AI Incident Database as an Educational Tool to Raise Awareness of AI Harms: A Classroom Exploration of Efficacy, Limitations, & Future Improvements / 2310.06269 / ISBN:https://doi.org/10.48550/arXiv.2310.06269 / Published by ArXiv / on (web) Publishing site
- 2 Research Design and Methodology
3 Analysis and Findings - A Review of the Ethics of Artificial Intelligence and its Applications in the United States / 2310.05751 / ISBN:https://doi.org/10.48550/arXiv.2310.05751 / Published by ArXiv / on (web) Publishing site
- 2. Literature Review
- A Survey of Large Language Models for Healthcare: from Data, Technology, and Applications to Accountability and Ethics / 2310.05694 / ISBN:https://doi.org/10.48550/arXiv.2310.05694 / Published by ArXiv / on (web) Publishing site
- 2. What LLMs can do for healthcare? from fundamental tasks to advanced applications
3. From PLMs to LLMs for healthcare
4. Usage and data for healthcare LLM
5. Improving fairness, accountability, transparency, and ethics - STREAM: Social data and knowledge collective intelligence platform for TRaining Ethical AI Models / 2310.05563 / ISBN:https://doi.org/10.48550/arXiv.2310.05563 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
3 The applications of STREAM - Ethics of Artificial Intelligence and Robotics in the Architecture, Engineering, and Construction Industry / 2310.05414 / ISBN:https://doi.org/10.48550/arXiv.2310.05414 / Published by ArXiv / on (web) Publishing site
- 7. Future Research Direction
- Commercialized Generative AI: A Critical Study of the Feasibility and Ethics of Generating Native Advertising Using Large Language Models in Conversational Web Search / 2310.04892 / ISBN:https://doi.org/10.48550/arXiv.2310.04892 / Published by ArXiv / on (web) Publishing site
- Introduction
Pilot Study: Text SERPs with Ads
Evaluation of the Pilot Study
Conclusion - Risk of AI in Healthcare: A Comprehensive Literature Review and Study Framework / 2309.14530 / ISBN:https://doi.org/10.48550/arXiv.2309.14530 / Published by ArXiv / on (web) Publishing site
- 3. Clinical Risks
- An Evaluation of GPT-4 on the ETHICS Dataset / 2309.10492 / ISBN:https://doi.org/10.48550/arXiv.2309.10492 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Datasets and Methods
3 Results
4 Discussion - A Conceptual Algorithm for Applying Ethical Principles of AI to Medical Practice / 2304.11530 / ISBN:https://doi.org/10.48550/arXiv.2304.11530 / Published by ArXiv / on (web) Publishing site
- 4 Towards solving key ethical challenges in Medical AI
- Responsible AI Pattern Catalogue: A Collection of Best Practices for AI Governance and Engineering / 2209.04963 / ISBN:https://doi.org/10.48550/arXiv.2209.04963 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
- FUTURE-AI: International consensus guideline for trustworthy and deployable artificial intelligence in healthcare / 2309.12325 / ISBN:https://doi.org/10.48550/arXiv.2309.12325 / Published by ArXiv / on (web) Publishing site
- METHODS
- Language Agents for Detecting Implicit Stereotypes in Text-to-Image Models at Scale / 2310.11778 / ISBN:https://doi.org/10.48550/arXiv.2310.11778 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Agent Design
3 Agent Benchmark
4 Agent Performance
5 Related Work
Appendix A Data Details
Appendix B Experiment Details - Specific versus General Principles for Constitutional AI / 2310.13798 / ISBN:https://doi.org/10.48550/arXiv.2310.13798 / Published by ArXiv / on (web) Publishing site
- Contents
1 Introduction
2 AI feedback on specific problematic AI traits
3 Generalization from a Simple Good for Humanity Principle
4 Reinforcement Learning with Good-for-Humanity Preference Models
5 Related Work
7 Contribution Statement
A Model Glossary
B Trait Preference Modeling
C General Prompts for GfH Preference Modeling
H Samples
I Responses on Prompts from PALMS, LaMDA, and InstructGPT - The Self 2.0: How AI-Enhanced Self-Clones Transform Self-Perception and Improve Presentation Skills / 2310.15112 / ISBN:https://doi.org/10.48550/arXiv.2310.15112 / Published by ArXiv / on (web) Publishing site
- 2 Related Work
5 Discussion - Systematic AI Approach for AGI: Addressing Alignment, Energy, and AGI Grand Challenges / 2310.15274 / ISBN:https://doi.org/10.48550/arXiv.2310.15274 / Published by ArXiv / on (web) Publishing site
- 5 System Design for AI Alignment
- AI Alignment and Social Choice: Fundamental Limitations and Policy Implications / 2310.16048 / ISBN:https://doi.org/10.48550/arXiv.2310.16048 / Published by ArXiv / on (web) Publishing site
- 2 Reinforcement Learning with Multiple Reinforcers
- A Comprehensive Review of AI-enabled Unmanned Aerial Vehicle: Trends, Vision, and Challenges / 2310.16360 / ISBN:https://doi.org/10.48550/arXiv.2310.16360 / Published by ArXiv / on (web) Publishing site
- I. Introduction
- Unpacking the Ethical Value Alignment in Big Models / 2310.17551 / ISBN:https://doi.org/10.48550/arXiv.2310.17551 / Published by ArXiv / on (web) Publishing site
- 3 Investigating the Ethical Values of Large Language Models
4 Equilibrium Alignment: A Prospective Paradigm for Ethical Value Alignment - Human participants in AI research: Ethics and transparency in practice / 2311.01254 / ISBN:https://doi.org/10.48550/arXiv.2311.01254 / Published by ArXiv / on (web) Publishing site
- Appendix A Evaluating Current Practices for Human-Participants Research
Appendix B Placing Research Ethics for Human Participants in Historical Context - Educating for AI Cybersecurity Work and Research: Ethics, Systems Thinking, and Communication Requirements / 2311.04326 / ISBN:https://doi.org/10.48550/arXiv.2311.04326 / Published by ArXiv / on (web) Publishing site
- Introduction
- Unlocking the Potential of ChatGPT: A Comprehensive Exploration of its Applications, Advantages, Limitations, and Future Directions in Natural Language Processing / 2304.02017 / ISBN:https://doi.org/10.48550/arXiv.2304.02017 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Overview of ChatGPT and its capabilities
4 Applications of ChatGPT in real-world scenarios
6 Limitations and potential challenges
8 Prompt engineering and generation
11 Conclusion - Fairness And Bias in Artificial Intelligence: A Brief Survey of Sources, Impacts, And Mitigation Strategies / 2304.07683 / ISBN:https://doi.org/10.48550/arXiv.2304.07683 / Published by ArXiv / on (web) Publishing site
- II. Sources of bias in AI
III. Impacts of bias in AI - Towards ethical multimodal systems / 2304.13765 / ISBN:https://doi.org/10.48550/arXiv.2304.13765 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
3 Crafting an Ethical Dataset
4 A Multimodal Ethics Classifier - A Brief History of Prompt: Leveraging Language Models. (Through Advanced Prompting) / 2310.04438 / ISBN:https://doi.org/10.48550/arXiv.2310.04438 / Published by ArXiv / on (web) Publishing site
- Abstract
I. Pre-introduction
II. Introduction
III. Prehistoric prompting: pre NN-era
IV. History of NLP between 2010 and 2015: the pre-attention mechanism era
VI. 2015: birth of the transformer
VII. The second wave in 2017: rise of RL
VIII. The third wave 2018: the rise of transformers
IX. 2019: THE YEAR OF CONTROL
X. 2020-2021: the rise of LLMS
XI. 2022-current: beyond language generation
XII. Conclusions - Synergizing Human-AI Agency: A Guide of 23 Heuristics for Service Co-Creation with LLM-Based Agents / 2310.15065 / ISBN:https://doi.org/10.48550/arXiv.2310.15065 / Published by ArXiv / on (web) Publishing site
- 3 Method
4 Findings
5 Discussion - She had Cobalt Blue Eyes: Prompt Testing to Create Aligned and Sustainable Language Models / 2310.18333 / ISBN:https://doi.org/10.48550/arXiv.2310.18333 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Related Works
3 ReFLeCT: Robust, Fair, and Safe LLM Construction Test Suite
4 Empirical Evaluation and Outcomes
5 Conclusion - How Trustworthy are Open-Source LLMs? An Assessment under Malicious Demonstrations Shows their Vulnerabilities / 2311.09447 / ISBN:https://doi.org/10.48550/arXiv.2311.09447 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Related Work
3 Methodology
4 Experiments
5 Conclusion
Limitations - Prudent Silence or Foolish Babble? Examining Large Language Models' Responses to the Unknown / 2311.09731 / ISBN:https://doi.org/10.48550/arXiv.2311.09731 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 UnknownBench: Evaluating LLMs on the Unknown
3 Experiments
4 Related Work
B Confidence Elicitation Method Comparison
D Additional Results and Figures
E Prompt Templates - Revolutionizing Customer Interactions: Insights and Challenges in Deploying ChatGPT and Generative Chatbots for FAQs / 2311.09976 / ISBN:https://doi.org/10.48550/arXiv.2311.09976 / Published by ArXiv / on (web) Publishing site
- 1. Introduction
3. Chatbot approaches overview: Taxonomy of existing methods
7. Future Research Directions - Practical Cybersecurity Ethics: Mapping CyBOK to Ethical Concerns / 2311.10165 / ISBN:https://doi.org/10.48550/arXiv.2311.10165 / Published by ArXiv / on (web) Publishing site
- 6 Limitations
- First, Do No Harm: Algorithms, AI, and Digital Product Liability - Managing Algorithmic Harms Through Liability Law and Market Incentives / 2311.10861 / ISBN:https://doi.org/10.48550/arXiv.2311.10861 / Published by ArXiv / on (web) Publishing site
- The Problem
Appendix A - What is an Algorithmic Harm? And a Bibliography
Appendix C - List of General Harms Created by Digital Products Provided by Claude.AI - Case Repositories: Towards Case-Based Reasoning for AI Alignment / 2311.10934 / ISBN:https://doi.org/10.48550/arXiv.2311.10934 / Published by ArXiv / on (web) Publishing site
- 2 Proposed Process
- Assessing AI Impact Assessments: A Classroom Study / 2311.11193 / ISBN:https://doi.org/10.48550/arXiv.2311.11193 / Published by ArXiv / on (web) Publishing site
- 2 Background
4 Findings
5 Discussion
A Overview of AIIA Instruments - GPT in Data Science: A Practical Exploration of Model Selection / 2311.11516 / ISBN:https://doi.org/10.48550/arXiv.2311.11516 / Published by ArXiv / on (web) Publishing site
- I. Introduction
II. Background
III. Approach: capturing and representing heuristics behind GPT's decision-making process
VI. Future work - Large Language Models in Education: Vision and Opportunities / 2311.13160 / ISBN:https://doi.org/10.48550/arXiv.2311.13160 / Published by ArXiv / on (web) Publishing site
- II. Education and LLMS
- Towards Auditing Large Language Models: Improving Text-based Stereotype Detection / 2311.14126 / ISBN:https://doi.org/10.48550/arXiv.2311.14126 / Published by ArXiv / on (web) Publishing site
- 3 Methodology
- Ethical Implications of ChatGPT in Higher Education: A Scoping Review / 2311.14378 / ISBN:https://doi.org/10.48550/arXiv.2311.14378 / Published by ArXiv / on (web) Publishing site
- Results
- Ethics and Responsible AI Deployment / 2311.14705 / ISBN:https://doi.org/10.48550/arXiv.2311.14705 / Published by ArXiv / on (web) Publishing site
- 4. Addressing bias, transparency, and accountability
- Survey on AI Ethics: A Socio-technical Perspective / 2311.17228 / ISBN:https://doi.org/10.48550/arXiv.2311.17228 / Published by ArXiv / on (web) Publishing site
- 3 Transparency and explainability
5 Responsibility, accountability, and regulations - Privacy and Copyright Protection in Generative AI: A Lifecycle Perspective / 2311.18252 / ISBN:https://doi.org/10.48550/arXiv.2311.18252 / Published by ArXiv / on (web) Publishing site
- 3 Mapping Challenges throughout the Data Lifecycle
- From Lab to Field: Real-World Evaluation of an AI-Driven Smart Video Solution to Enhance Community Safety / 2312.02078 / ISBN:https://doi.org/10.48550/arXiv.2312.02078 / Published by ArXiv / on (web) Publishing site
- Software system features
- Understanding Teacher Perspectives and Experiences after Deployment of AI Literacy Curriculum in Middle-school Classrooms / 2312.04839 / ISBN:https://doi.org/10.48550/arXiv.2312.04839 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
3 Results - Generative AI in Higher Education: Seeing ChatGPT Through Universities' Policies, Resources, and Guidelines / 2312.05235 / ISBN:https://doi.org/10.48550/arXiv.2312.05235 / Published by ArXiv / on (web) Publishing site
- 2. Research questions
6. Discussion - Contra generative AI detection in higher education assessments / 2312.05241 / ISBN:https://doi.org/10.48550/arXiv.2312.05241 / Published by ArXiv / on (web) Publishing site
- 2. The pitfalls in detecting generative AI output
3. Detectors are not useful - Intelligence Primer / 2008.07324 / ISBN:https://doi.org/10.48550/arXiv.2008.07324 / Published by ArXiv / on (web) Publishing site
- 12 Large language models and Generative AI
- Control Risk for Potential Misuse of Artificial Intelligence in Science / 2312.06632 / ISBN:https://doi.org/10.48550/arXiv.2312.06632 / Published by ArXiv / on (web) Publishing site
- 6 Related Works
Appendix C Detailed Implementation of SciGuard
Appendix D Details of Benchmark Results - The AI Assessment Scale (AIAS): A Framework for Ethical Integration of Generative AI in Educational Assessment / 2312.07086 / ISBN:https://doi.org/10.48550/arXiv.2312.07086 / Published by ArXiv / on (web) Publishing site
- Literature
The AI Assessment Scale - Culturally Responsive Artificial Intelligence -- Problems, Challenges and Solutions / 2312.08467 / ISBN:https://doi.org/10.48550/arXiv.2312.08467 / Published by ArXiv / on (web) Publishing site
- Culturally responsive AI – current landscape
- Investigating Responsible AI for Scientific Research: An Empirical Study / 2312.09561 / ISBN:https://doi.org/10.48550/arXiv.2312.09561 / Published by ArXiv / on (web) Publishing site
- II. Background and motivation
- Designing Guiding Principles for NLP for Healthcare: A Case Study of Maternal Health / 2312.11803 / ISBN:https://doi.org/10.48550/arXiv.2312.11803 / Published by ArXiv / on (web) Publishing site
- 3 Materials and methods
- Learning Human-like Representations to Enable Learning Human Values / 2312.14106 / ISBN:https://doi.org/10.48550/arXiv.2312.14106 / Published by ArXiv / on (web) Publishing site
- 4 Learning Human Morality Judgments
5 Representational Alignment Supports Learning Multiple Human Values - Autonomous Threat Hunting: A Future Paradigm for AI-Driven Threat Intelligence / 2401.00286 / ISBN:https://doi.org/10.48550/arXiv.2401.00286 / Published by ArXiv / on (web) Publishing site
- 1. Introduction
6. Case studies and applications
7. Evaluation metrics and performance benchmarks - Exploring the Frontiers of LLMs in Psychological Applications: A Comprehensive Review / 2401.01519 / ISBN:https://doi.org/10.48550/arXiv.2401.01519 / Published by ArXiv / on (web) Publishing site
- 4. LLMs in educational and developmental psychology
5. LLMs in social and cultural psychology
6. LLMs as research tools in psychology - MULTI-CASE: A Transformer-based Ethics-aware Multimodal Investigative Intelligence Framework / 2401.01955 / ISBN:https://doi.org/10.48550/arXiv.2401.01955 / Published by ArXiv / on (web) Publishing site
- VI. Discussion and future work
- AI Ethics Principles in Practice: Perspectives of Designers and Developers / 2112.07467 / ISBN:https://doi.org/10.48550/arXiv.2112.07467 / Published by ArXiv / on (web) Publishing site
- III. Methods
- Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making / 2401.08691 / ISBN:https://doi.org/10.48550/arXiv.2401.08691 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
III Accounting for bias - 7 Addressing fairness in the banking sector - Business and ethical concerns in domestic Conversational Generative AI-empowered multi-robot systems / 2401.09473 / ISBN:https://doi.org/10.48550/arXiv.2401.09473 / Published by ArXiv / on (web) Publishing site
- 2 Background
- Beyond principlism: Practical strategies for ethical AI use in research practices / 2401.15284 / ISBN:https://doi.org/10.48550/arXiv.2401.15284 / Published by ArXiv / on (web) Publishing site
- 1 The “Triple-Too” problem of AI ethics
2 A shift to user-centered realism in scientific contexts
3 Five specific goals and action-guiding strategies for ethical AI use in research practices - A Scoping Study of Evaluation Practices for Responsible AI Tools: Steps Towards Effectiveness Evaluations / 2401.17486 / ISBN:https://doi.org/10.48550/arXiv.2401.17486 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Related work
4 RAI tool evaluation practices
5 Towards evaluation of RAI tool effectiveness
A List of RAI tools, with their primary publication
B RAI tools listed by target stage of AI development - Detecting Multimedia Generated by Large AI Models: A Survey / 2402.00045 / ISBN:https://doi.org/10.48550/arXiv.2402.00045 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Generation
3 Detection
5 Discussion - Generative Artificial Intelligence in Higher Education: Evidence from an Analysis of Institutional Policies and Guidelines / 2402.01659 / ISBN:https://doi.org/10.48550/arXiv.2402.01659 / Published by ArXiv / on (web) Publishing site
- Abstract
2. Related literature
4. Findings
5. Discussion - (A)I Am Not a Lawyer, But...: Engaging Legal Experts towards Responsible LLM Policies for Legal Advice / 2402.01864 / ISBN:https://doi.org/10.48550/arXiv.2402.01864 / Published by ArXiv / on (web) Publishing site
- 2 Related work and our approach
3 Methods: case-based expert deliberation
4 Results
5 Discussion
C Linear regression of participants' AI usage and desired responses - Generative Artificial Intelligence in Healthcare: Ethical Considerations and Assessment Checklist / 2311.02107 / ISBN:https://doi.org/10.48550/arXiv.2311.02107 / Published by ArXiv / on (web) Publishing site
- Discussion
- How do machines learn? Evaluating the AIcon2abs method / 2401.07386 / ISBN:https://doi.org/10.48550/arXiv.2401.07386 / Published by ArXiv / on (web) Publishing site
- 1. Introduction
4. Results - I Think, Therefore I am: Benchmarking Awareness of Large Language Models Using AwareBench / 2401.17882 / ISBN:https://doi.org/10.48550/arXiv.2401.17882 / Published by ArXiv / on (web) Publishing site
- 2 Related Work
4 Awareness Dataset: AWAREEVAL
5 Experiments
Limitation
A AWAREEVAL Dataset Details
B Experimental Settings & Results - Mapping the Ethics of Generative AI: A Comprehensive Scoping Review / 2402.08323 / ISBN:https://doi.org/10.48550/arXiv.2402.08323 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Methods
3 Results
Appendix A - Taking Training Seriously: Human Guidance and Management-Based Regulation of Artificial Intelligence / 2402.08466 / ISBN:https://doi.org/10.48550/arXiv.2402.08466 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
- User Modeling and User Profiling: A Comprehensive Survey / 2402.09660 / ISBN:https://doi.org/10.48550/arXiv.2402.09660 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
- Inadequacies of Large Language Model Benchmarks in the Era of Generative Artificial Intelligence / 2402.09880 / ISBN:https://doi.org/10.48550/arXiv.2402.09880 / Published by ArXiv / on (web) Publishing site
- Abstract
I. Introduction
II. Background and Related Work
IV. Technological Aspects
V. Processual Elements
VI. Human Dynamics
VII. Discussions - Copyleft for Alleviating AIGC Copyright Dilemma: What-if Analysis, Public Perception and Implications / 2402.12216 / ISBN:https://doi.org/10.48550/arXiv.2402.12216 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Related Work
3 The AIGC Copyright Dilemma: A What-if Analysis - Evolving AI Collectives to Enhance Human Diversity and Enable Self-Regulation / 2402.12590 / ISBN:https://doi.org/10.48550/arXiv.2402.12590 / Published by ArXiv / on (web) Publishing site
- 1. Introduction
3. Enhanced Performance of Free-Formed AI Collectives
A. Cocktail Simulation
B. Sentence Making Simulation
C. Public Good Simulation - What if LLMs Have Different World Views: Simulating Alien Civilizations with LLM-based Agents / 2402.13184 / ISBN:https://doi.org/10.48550/arXiv.2402.13184 / Published by ArXiv / on (web) Publishing site
- 2 Related Work
4 CosmoAgent Architecture
6 Results and Evaluation
A Appendix - Multi-stakeholder Perspective on Responsible Artificial Intelligence and Acceptability in Education / 2402.15027 / ISBN:https://doi.org/10.48550/arXiv.2402.15027 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
- Autonomous Vehicles: Evolution of Artificial Intelligence and Learning Algorithms / 2402.17690 / ISBN:https://doi.org/10.48550/arXiv.2402.17690 / Published by ArXiv / on (web) Publishing site
- II. The AI-Powered Development Life-Cycle in Autonomous Vehicles
V. Review of Existing Research and Use Cases - Envisioning the Applications and Implications of Generative AI for News Media / 2402.18835 / ISBN:https://doi.org/10.48550/arXiv.2402.18835 / Published by ArXiv / on (web) Publishing site
- 2 The Suitability of Generative AI for Newsroom Tasks
- FATE in MMLA: A Student-Centred Exploration of Fairness, Accountability, Transparency, and Ethics in Multimodal Learning Analytics / 2402.19071 / ISBN:https://doi.org/10.48550/arXiv.2402.19071 / Published by ArXiv / on (web) Publishing site
- 3. Methods
- Guidelines for Integrating Value Sensitive Design in Responsible AI Toolkits / 2403.00145 / ISBN:https://doi.org/10.48550/arXiv.2403.00145 / Published by ArXiv / on (web) Publishing site
- B Toolkits Considered for Inclusion
- The Minimum Information about CLinical Artificial Intelligence Checklist for Generative Modeling Research (MI-CLAIM-GEN) / 2403.02558 / ISBN:https://doi.org/10.48550/arXiv.2403.02558 / Published by ArXiv / on (web) Publishing site
- Part 1. Study design
Part 2. A new train-test split for prompt development and few-shot learning
Part 5. Interpretability of generative models
Part 6. End-to-end pipeline replication
Table 1. Updated MI-CLAIM checklist for generative AI clinical studies. - A Survey on Human-AI Teaming with Large Pre-Trained Models / 2403.04931 / ISBN:https://doi.org/10.48550/arXiv.2403.04931 / Published by ArXiv / on (web) Publishing site
- 2 AI Model Improvements with Human-AI Teaming
3 Effective Human-AI Joint Systems
4 Safe, Secure and Trustworthy AI
5 Applications - How Trustworthy are Open-Source LLMs? An Assessment under Malicious Demonstrations Shows their Vulnerabilities / 2311.09447 / ISBN:https://doi.org/10.48550/arXiv.2311.09447 / Published by ArXiv / on (web) Publishing site
- B Baseline Setup
C Prompt Templates
D More Results - AGI Artificial General Intelligence for Education / 2304.12479 / ISBN:https://doi.org/10.48550/arXiv.2304.12479 / Published by ArXiv / on (web) Publishing site
- 3. The Potentials of AGI in Transforming Future Education
4. Ethical Issues and Concerns - Legally Binding but Unfair? Towards Assessing Fairness of Privacy Policies / 2403.08115 / ISBN:https://doi.org/10.48550/arXiv.2403.08115 / Published by ArXiv / on (web) Publishing site
- 2 Related Work
6 Ethics and Morality - Towards a Privacy and Security-Aware Framework for Ethical AI: Guiding the Development and Assessment of AI Systems / 2403.08624 / ISBN:https://doi.org/10.48550/arXiv.2403.08624 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
- Review of Generative AI Methods in Cybersecurity / 2403.08701 / ISBN:https://doi.org/10.48550/arXiv.2403.08701 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Attacking GenAI
3 Cyber Offense
4 Cyber Defence - Evaluation Ethics of LLMs in Legal Domain / 2403.11152 / ISBN:https://doi.org/10.48550/arXiv.2403.11152 / Published by ArXiv / on (web) Publishing site
- 3 Method
- Trust in AI: Progress, Challenges, and Future Directions / 2403.14680 / ISBN:https://doi.org/10.48550/arXiv.2403.14680 / Published by ArXiv / on (web) Publishing site
- 3. Findings
- AI Ethics: A Bibliometric Analysis, Critical Issues, and Key Gaps / 2403.14681 / ISBN:https://doi.org/10.48550/arXiv.2403.14681 / Published by ArXiv / on (web) Publishing site
- Results
AI Ethics Development Phases Based on Keyword Analysis
Key AI Ethics Issues - Safeguarding Marketing Research: The Generation, Identification, and Mitigation of AI-Fabricated Disinformation / 2403.14706 / ISBN:https://doi.org/10.48550/arXiv.2403.14706 / Published by ArXiv / on (web) Publishing site
- Methodology
Results
Web Appendix A: Analysis of the Disinformation Manipulations - The Journey to Trustworthy AI- Part 1 Pursuit of Pragmatic Frameworks / 2403.15457 / ISBN:https://doi.org/10.48550/arXiv.2403.15457 / Published by ArXiv / on (web) Publishing site
- 8 Implementation Framework
- The Pursuit of Fairness in Artificial Intelligence Models A Survey / 2403.17333 / ISBN:https://doi.org/10.48550/arXiv.2403.17333 / Published by ArXiv / on (web) Publishing site
- 6 How Users can be affected by unfair ML Systems
- Power and Play Investigating License to Critique in Teams AI Ethics Discussions / 2403.19049 / ISBN:https://doi.org/10.48550/arXiv.2403.19049 / Published by ArXiv / on (web) Publishing site
- 2 Methods
3 RQ1: What Factors Influence Members’ “License to Critique” when Discussing AI Ethics with their Team?
4 RQ2: How Do AI Ethics Discussions Unfold while Playing a Game Oriented toward Speculative Critique?
5 Discussion - AI Act and Large Language Models (LLMs): When critical issues and privacy impact require human and ethical oversight / 2404.00600 / ISBN:https://doi.org/10.48550/arXiv.2404.00600 / Published by ArXiv / on (web) Publishing site
- 6. Large Language Models (LLMs) - Introduction
- Exploring the Nexus of Large Language Models and Legal Systems: A Short Survey / 2404.00990 / ISBN:https://doi.org/10.48550/arXiv.2404.00990 / Published by ArXiv / on (web) Publishing site
- 2 Applications of Large Language Models in Legal Tasks
3 Fine-Tuned Large Language Models in Various Countries and Regions
4 Legal Problems of Large Language Models - A Review of Multi-Modal Large Language and Vision Models / 2404.01322 / ISBN:https://doi.org/10.48550/arXiv.2404.01322 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
4 Specific Large Language Models
5 Vision Models and Multi-Modal Large Language Models
6 Model Tuning
7 Model Evaluation and Benchmarking
8 Conclusions - Designing for Human-Agent Alignment: Understanding what humans want from their agents / 2404.04289 / ISBN:https://doi.org/10.1145/3613905.3650948 / Published by ArXiv / on (web) Publishing site
- 3 Method
5 Discussion - Frontier AI Ethics: Anticipating and Evaluating the Societal Impacts of Language Model Agents / 2404.06750 / ISBN:https://arxiv.org/abs/2404.06750 / Published by ArXiv / on (web) Publishing site
- A Primer
Rebooting Machine Ethics
Language Model Agents in Society - AI Alignment: A Comprehensive Survey / 2310.19852 / ISBN:https://doi.org/10.48550/arXiv.2310.19852 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Learning from Feedback
4 Assurance - Generative Ghosts: Anticipating Benefits and Risks of AI Afterlives / 2402.01662 / ISBN:https://doi.org/10.48550/arXiv.2402.01662 / Published by ArXiv / on (web) Publishing site
- 3 Generative Ghosts: A Design Space
4 Benefits and Risks of Generative Ghosts
5 Discussion - On the role of ethics and sustainability in business innovation / 2404.07678 / ISBN:https://doi.org/10.48550/arXiv.2404.07678 / Published by ArXiv / on (web) Publishing site
- Ethical considerations
- PoliTune: Analyzing the Impact of Data Selection and Fine-Tuning on Economic and Political Biases in Large Language Models / 2404.08699 / ISBN:https://doi.org/10.48550/arXiv.2404.08699 / Published by ArXiv / on (web) Publishing site
- 2 Background and Related Work
3 Methodology
4 Evaluation
5 Conclusion
A Dataset Filtering Prompts
B Instruction Generation Prompts
C GPT Scoring Prompt - Characterizing and modeling harms from interactions with design patterns in AI interfaces / 2404.11370 / ISBN:https://doi.org/10.48550/arXiv.2404.11370 / Published by ArXiv / on (web) Publishing site
- 3 Scoping Review of Design Patterns, Affordances, and Harms in AI Interfaces
- Taxonomy to Regulation: A (Geo)Political Taxonomy for AI Risks and Regulatory Measures in the EU AI Act / 2404.11476 / ISBN:https://doi.org/10.48550/arXiv.2404.11476 / Published by ArXiv / on (web) Publishing site
- 3 A Geo-Political AI Risk Taxonomy
- Just Like Me: The Role of Opinions and Personal Experiences in The Perception of Explanations in Subjective Decision-Making / 2404.12558 / ISBN:https://doi.org/10.48550/arXiv.2404.12558 / Published by ArXiv / on (web) Publishing site
- 2 Method
4 Discussion and Implications - Large Language Model Supply Chain: A Research Agenda / 2404.12736 / ISBN:https://doi.org/10.48550/arXiv.2404.12736 / Published by ArXiv / on (web) Publishing site
- 3 LLM Infrastructure
4 LLM Lifecycle - The Necessity of AI Audit Standards Boards / 2404.13060 / ISBN:https://doi.org/10.48550/arXiv.2404.13060 / Published by ArXiv / on (web) Publishing site
- 3 Governance for safety
- Modeling Emotions and Ethics with Large Language Models / 2404.13071 / ISBN:https://doi.org/10.48550/arXiv.2404.13071 / Published by ArXiv / on (web) Publishing site
- 3 Case Study #1: Linguistic Features of Emotion
- Beyond Personhood: Agency, Accountability, and the Limits of Anthropomorphic Ethical Analysis / 2404.13861 / ISBN:https://doi.org/10.48550/arXiv.2404.13861 / Published by ArXiv / on (web) Publishing site
- 2 Mechanistic Agency: A Common View in AI Practice
- Fairness in AI: challenges in bridging the gap between algorithms and law / 2404.19371 / ISBN:https://doi.org/10.48550/arXiv.2404.19371 / Published by ArXiv / on (web) Publishing site
- I. Introduction
- War Elephants: Rethinking Combat AI and Human Oversight / 2404.19573 / ISBN:https://doi.org/10.48550/arXiv.2404.19573 / Published by ArXiv / on (web) Publishing site
- 2 Background
- Not a Swiss Army Knife: Academics' Perceptions of Trade-Offs Around Generative Artificial Intelligence Use / 2405.00995 / ISBN:https://doi.org/10.48550/arXiv.2405.00995 / Published by ArXiv / on (web) Publishing site
- 3 Method
4 Findings
5 Discussion - A Survey on Large Language Models for Critical Societal Domains: Finance, Healthcare, and Law / 2405.01769 / ISBN:https://doi.org/10.48550/arXiv.2405.01769 / Published by ArXiv / on (web) Publishing site
- 3 Finance
4 Medicine and Healthcare
5 Law
6 Ethics - Exploring the Potential of the Large Language Models (LLMs) in Identifying Misleading News Headlines / 2405.03153 / ISBN:https://doi.org/10.48550/arXiv.2405.03153 / Published by ArXiv / on (web) Publishing site
- 3 Method
- Organizing a Society of Language Models: Structures and Mechanisms for Enhanced Collective Intelligence / 2405.03825 / ISBN:https://doi.org/10.48550/arXiv.2405.03825 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Motivation - A Fourth Wave of Open Data? Exploring the Spectrum of Scenarios for Open Data and Generative AI / 2405.04333 / ISBN:https://doi.org/10.48550/arXiv.2405.04333 / Published by ArXiv / on (web) Publishing site
- Glossary of Terms
Executive Summary
1. Introduction
3. A Spectrum of Scenarios of Open Data for Generative AI
5. Recommendations for Advancing Open Data in Generative AI
Appendix - Guiding the Way: A Comprehensive Examination of AI Guidelines in Global Media / 2405.04706 / ISBN:https://doi.org/10.48550/arXiv.2405.04706 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
4 Results - Trustworthy AI-Generative Content in Intelligent 6G Network: Adversarial, Privacy, and Fairness / 2405.05930 / ISBN:https://doi.org/10.48550/arXiv.2405.05930 / Published by ArXiv / on (web) Publishing site
- I. Introduction
II. Trustworthy AIGC in 6G Network
III. Adversarial of AIGC Models in 6G Network
IV. Privacy of AIGC in 6G Network
V. Fairness of AIGC in 6G Network - RAI Guidelines: Method for Generating Responsible AI Guidelines Grounded in Regulations and Usable by (Non-)Technical Roles / 2307.15158 / ISBN:https://doi.org/10.48550/arXiv.2307.15158 / Published by ArXiv / on (web) Publishing site
- 4 Method for Generating Responsible AI Guidelines
5 Evaluation of the 22 Responsible AI Guidelines - Redefining Qualitative Analysis in the AI Era: Utilizing ChatGPT for Efficient Thematic Analysis / 2309.10771 / ISBN:https://doi.org/10.48550/arXiv.2309.10771 / on (web) Publishing site
- Abstract
1 Introduction
2 Related Work
3 Methods
4 Users’ Experiences and Challenges with ChatGPT
5 Analyses of the Design Process
6 User’s Attitude on ChatGPT’s Qualitative Analysis Assistance: from no to yes
7 Discussion
8 Limitations and Future Work
9 Conclusion
A Appendix - Should agentic conversational AI change how we think about ethics? Characterising an interactional ethics centred on respect / 2401.09082 / ISBN:https://doi.org/10.48550/arXiv.2401.09082 / Published by ArXiv / on (web) Publishing site
- Introduction
Social-interactional harms
Design implications for LLM agents - Not My Voice! A Taxonomy of Ethical and Safety Harms of Speech Generators / 2402.01708 / ISBN:https://doi.org/10.48550/arXiv.2402.01708 / Published by ArXiv / on (web) Publishing site
- 3 Overview of Speech Generation
- The Wolf Within: Covert Injection of Malice into MLLM Societies via an MLLM Operative / 2402.14859 / ISBN:https://doi.org/10.48550/arXiv.2402.14859 / Published by ArXiv / on (web) Publishing site
- Abstract
1. Introduction
2. Related Work
3. Methodology
4. Experiments - Social Choice Should Guide AI Alignment in Dealing with Diverse Human Feedback / 2404.10271 / ISBN:https://doi.org/10.48550/arXiv.2404.10271 / Published by ArXiv / on (web) Publishing site
- 2. Background
3. What Are the Collective Decision Problems and their Alternatives in this Context?
5. What Is the Format of Human Feedback?
6. How Do We Incorporate Diverse Individual Feedback? - A scoping review of using Large Language Models (LLMs) to investigate Electronic Health Records (EHRs) / 2405.03066 / ISBN:https://doi.org/10.48550/arXiv.2405.03066 / Published by ArXiv / on (web) Publishing site
- Abstract
2 Materials
3 Results
4 Discussion - Integrating Emotional and Linguistic Models for Ethical Compliance in Large Language Models / 2405.07076 / ISBN:https://doi.org/10.48550/arXiv.2405.07076 / Published by ArXiv / on (web) Publishing site
- 2 Related Work
3 Quantitative Models of Emotions, Behaviors, and Ethics
4 Pilot Studies
Appendix S: Multiple Adversarial LLMs - Using ChatGPT for Thematic Analysis / 2405.08828 / ISBN:https://doi.org/10.48550/arXiv.2405.08828 / Published by ArXiv / on (web) Publishing site
- 2 Coding in Thematic Analysis: Manual vs GPT-driven Approaches
5 Discussion and Limitations
6 OpenAI Updates on Policies and Model Capabilities: Implications for Thematic Analysis
7 Conclusion - Cyber Risks of Machine Translation Critical Errors : Arabic Mental Health Tweets as a Case Study / 2405.11668 / ISBN:https://doi.org/10.48550/arXiv.2405.11668 / Published by ArXiv / on (web) Publishing site
- 6. Conclusion
- A Comprehensive Overview of Large Language Models (LLMs) for Cyber Defences: Opportunities and Directions / 2405.14487 / ISBN:https://doi.org/10.48550/arXiv.2405.14487 / Published by ArXiv / on (web) Publishing site
- I. Introduction
II. Threat Intelligence
IV. Network Security
VII. Cyber Security Operations Automation - Towards Clinical AI Fairness: Filling Gaps in the Puzzle / 2405.17921 / ISBN:https://doi.org/10.48550/arXiv.2405.17921 / Published by ArXiv / on (web) Publishing site
- Methods in clinical AI fairness research
- The ethical situation of DALL-E 2 / 2405.19176 / ISBN:https://doi.org/10.48550/arXiv.2405.19176 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Understanding what DALL-E 2 can actually do - Using Large Language Models for Humanitarian Frontline Negotiation: Opportunities and Considerations / 2405.20195 / ISBN:https://doi.org/10.48550/arXiv.2405.20195 / Published by ArXiv / on (web) Publishing site
- 3. Method
4. Quantitative Results
5. Interview Results: Opportunities and Concerns of Using LLMs in the Frontline
6. Discussion
A. Appendix - The AI Alignment Paradox / 2405.20806 / ISBN:https://doi.org/10.48550/arXiv.2405.20806 / Published by ArXiv / on (web) Publishing site
- Paper
- Transforming Computer Security and Public Trust Through the Exploration of Fine-Tuning Large Language Models / 2406.00628 / ISBN:https://doi.org/10.48550/arXiv.2406.00628 / Published by ArXiv / on (web) Publishing site
- 2 Background, Foundational Studies, and Discussion:
3 Experimental Design, Overview, and Discussion
4 Comparative Analysis of Pre-Trained Models - How Ethical Should AI Be? How AI Alignment Shapes the Risk Preferences of LLMs / 2406.01168 / ISBN:https://doi.org/10.48550/arXiv.2406.01168 / Published by ArXiv / on (web) Publishing site
- Introduction
I. Description of Method/Empirical Design
III. Impact of Alignment on LLMs’ Risk Preferences
IV. Impact of Alignments on Corporate Investment Forecasts
Figures and tables - Promoting Fairness and Diversity in Speech Datasets for Mental Health and Neurological Disorders Research / 2406.04116 / ISBN:https://doi.org/10.48550/arXiv.2406.04116 / Published by ArXiv / on (web) Publishing site
- 4. Desiderata
6. Discussion - MoralBench: Moral Evaluation of LLMs / 2406.04428 / Published by ArXiv / on (web) Publishing site
- 2 Related Work
3 Benchmark and Method - Can Prompt Modifiers Control Bias? A Comparative Analysis of Text-to-Image Generative Models / 2406.05602 / Published by ArXiv / on (web) Publishing site
- Abstract
1. Introduction
2. Related Work
3. Bias Evaluation
4. Methodology
5. Results
6. Discussion
7. Conclusion
- The Impact of AI on Academic Research and Publishing / 2406.06009 / Published by ArXiv / on (web) Publishing site
- Introduction
Ethics of AI for Writing Papers - An Empirical Design Justice Approach to Identifying Ethical Considerations in the Intersection of Large Language Models and Social Robotics / 2406.06400 / ISBN:https://doi.org/10.48550/arXiv.2406.06400 / Published by ArXiv / on (web) Publishing site
- 3 Methodology
5 Discussion - The Ethics of Interaction: Mitigating Security Threats in LLMs / 2401.12273 / ISBN:https://doi.org/10.48550/arXiv.2401.12273 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Why Ethics Matter in LLM Attacks?
3 Potential Misuse and Security Concerns
4 Towards Ethical Mitigation: A Proposed Methodology
6 Ethical Response to LLM Attacks - Global AI Governance in Healthcare: A Cross-Jurisdictional Regulatory Analysis / 2406.08695 / ISBN:https://doi.org/10.48550/arXiv.2406.08695 / Published by ArXiv / on (web) Publishing site
- 4 Global Regulatory Landscape of AI
- Fair by design: A sociotechnical approach to justifying the fairness of AI-enabled systems across the lifecycle / 2406.09029 / ISBN:https://doi.org/10.48550/arXiv.2406.09029 / Published by ArXiv / on (web) Publishing site
- 3 Assuring fairness across the AI lifecycle
4 Assuring AI fairness in healthcare - Some things never change: how far generative AI can really change software engineering practice / 2406.09725 / ISBN:https://doi.org/10.48550/arXiv.2406.09725 / Published by ArXiv / on (web) Publishing site
- 4 Results
- Federated Learning driven Large Language Models for Swarm Intelligence: A Survey / 2406.09831 / ISBN:https://doi.org/10.48550/arXiv.2406.09831 / Published by ArXiv / on (web) Publishing site
- II. Foundations and Integration of SI and LLM
III. Federated LLMs for Swarm Intelligence - Applications of Generative AI in Healthcare: algorithmic, ethical, legal and societal considerations / 2406.10632 / ISBN:https://doi.org/10.48550/arXiv.2406.10632 / Published by ArXiv / on (web) Publishing site
- III. Analysis
Appendix C Algorithmic / technical aspects - Conversational Agents as Catalysts for Critical Thinking: Challenging Design Fixation in Group Design / 2406.11125 / ISBN:https://doi.org/10.48550/arXiv.2406.11125 / Published by ArXiv / on (web) Publishing site
- 2 BEYOND RECOMMENDATIONS: ENHANCING CRITICAL THINKING WITH GENERATIVE AI
6 POTENTIAL DESIGN CONSIDERATIONS
- Abstract
2 Large Language Model Risks
3 Strategies in Securing Large Language models
4 Challenges in Implementing Guardrails
7 Conclusion - Leveraging Large Language Models for Patient Engagement: The Power of Conversational AI in Digital Health
/ 2406.13659 / ISBN:https://doi.org/10.48550/arXiv.2406.13659 / Published by ArXiv / on (web) Publishing site
- III. CASE STUDIES: APPLICATIONS OF LLMs IN PATIENT ENGAGEMENT
- AI Alignment through Reinforcement Learning from Human Feedback? Contradictions and Limitations / 2406.18346 / ISBN:https://doi.org/10.48550/arXiv.2406.18346 / Published by ArXiv / on (web) Publishing site
- 2 Background
3 Limitations of RLxF
4 The Internal Tensions and Ethical Issues in RLxF - A Survey on Privacy Attacks Against Digital Twin Systems in AI-Robotics / 2406.18812 / ISBN:https://doi.org/10.48550/arXiv.2406.18812 / Published by ArXiv / on (web) Publishing site
- III. ATTACKS ON DT-INTEGRATED AI ROBOTS
- Staying vigilant in the Age of AI: From content generation to content authentication / 2407.00922 / ISBN:https://doi.org/10.48550/arXiv.2407.00922 / Published by ArXiv / on (web) Publishing site
- Emphasizing Reasoning Over Detection
Prospective Usage: Assessing Veracity in Everyday Content - SecGenAI: Enhancing Security of Cloud-based Generative AI Applications within Australian Critical Technologies of National Interest / 2407.01110 / ISBN:https://doi.org/10.48550/arXiv.2407.01110 / Published by ArXiv / on (web) Publishing site
- II. UNDERSTANDING GENAI SECURITY
III. CRITICAL ANALYSIS - A Blueprint for Auditing Generative AI / 2407.05338 / ISBN:https://doi.org/10.48550/arXiv.2407.05338 / Published by ArXiv / on (web) Publishing site
- 5 Model audits
- Rethinking Machine Ethics -- Can LLMs Perform Moral Reasoning through the Lens of Moral Theories? / 2308.15399 / ISBN:https://doi.org/10.48550/arXiv.2308.15399 / Published by ArXiv / on (web) Publishing site
- B Details of Instructions
- PoliTune: Analyzing the Impact of Data Selection and Fine-Tuning on Economic and Political Biases in Large Language Models / 2404.08699 / ISBN:https://doi.org/10.48550/arXiv.2404.08699 / Published by ArXiv / on (web) Publishing site
- E GPT Scoring Prompt
- Bridging the Global Divide in AI Regulation: A Proposal for a Contextual, Coherent, and Commensurable Framework / 2303.11196 / ISBN:https://doi.org/10.48550/arXiv.2303.11196 / Published by ArXiv / on (web) Publishing site
- IV. Proposing an Alternative 3C Framework
- CogErgLLM: Exploring Large Language Model Systems Design Perspective Using Cognitive Ergonomics / 2407.02885 / ISBN:https://doi.org/10.48550/arXiv.2407.02885 / Published by ArXiv / on (web) Publishing site
- 4 Design Framework
5 Case Studies - Have We Reached AGI? Comparing ChatGPT, Claude, and Gemini to Human Literacy and Education Benchmarks / 2407.09573 / ISBN:https://doi.org/10.48550/arXiv.2407.09573 / Published by ArXiv / on (web) Publishing site
- Abstract
- Generative AI for Health Technology Assessment: Opportunities, Challenges, and Policy Considerations / 2407.11054 / ISBN:https://doi.org/10.48550/arXiv.2407.11054 / Published by ArXiv / on (web) Publishing site
- Introduction
Applications of generative AI in literature reviews and evidence synthesis
Applications of generative AI to real-world evidence (RWE):
Applications of generative AI to health economic modeling
Limitations of generative AI in HTA applications
Glossary - Thorns and Algorithms: Navigating Generative AI Challenges Inspired by Giraffes and Acacias / 2407.11360 / ISBN:https://doi.org/10.48550/arXiv.2407.11360 / Published by ArXiv / on (web) Publishing site
- 4 Generative AI and Humans: Risks and Mitigation
- Prioritizing High-Consequence Biological Capabilities in Evaluations of Artificial Intelligence Models / 2407.13059 / ISBN:https://doi.org/10.48550/arXiv.2407.13059 / Published by ArXiv / on (web) Publishing site
- Introduction
- Assurance of AI Systems From a Dependability Perspective / 2407.13948 / ISBN:https://doi.org/10.48550/arXiv.2407.13948 / Published by ArXiv / on (web) Publishing site
- 1 Introduction: Assurance for Traditional Systems
3 Assurance of AI Systems for Specific Functions
4 Assurance for General-Purpose AI
5 Assurance and Alignment for AGI - Open Artificial Knowledge / 2407.14371 / ISBN:https://doi.org/10.48550/arXiv.2407.14371 / Published by ArXiv / on (web) Publishing site
- 3. OAK Dataset
4. Automatic Prompt Generation
5. Use Considerations
Appendices - Honest Computing: Achieving demonstrable data lineage and provenance for driving data and process-sensitive policies / 2407.14390 / ISBN:https://doi.org/10.48550/arXiv.2407.14390 / Published by ArXiv / on (web) Publishing site
- 1. Introduction
4. Discussion - RogueGPT: dis-ethical tuning transforms ChatGPT4 into a Rogue AI in 158 Words / 2407.15009 / ISBN:https://doi.org/10.48550/arXiv.2407.15009 / Published by ArXiv / on (web) Publishing site
- Abstract
I. Introduction
II. Background
III. Methodology
IV. Results
V. Benchmarking with ChatGPT4 Default Interface
VI. Discussion
VII. Conclusion - Nudging Using Autonomous Agents: Risks and Ethical Considerations / 2407.16362 / ISBN:https://doi.org/10.48550/arXiv.2407.16362 / Published by ArXiv / on (web) Publishing site
- 4 Ethical Considerations
5 Principles for the Nudge Lifecycle - Mapping the individual, social, and biospheric impacts of Foundation Models / 2407.17129 / ISBN:https://doi.org/10.48550/arXiv.2407.17129 / Published by ArXiv / on (web) Publishing site
- 4 Mapping Individual, Social, and Biospheric Impacts of Foundation Models
A Appendix - Exploring the Role of Social Support when Integrating Generative AI into Small Business Workflows / 2407.21404 / ISBN:https://doi.org/10.48550/arXiv.2407.21404 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Related Work
4 Findings
A Example Storyboards - AI for All: Identifying AI incidents Related to Diversity and Inclusion / 2408.01438 / ISBN:https://doi.org/10.48550/arXiv.2408.01438 / Published by ArXiv / on (web) Publishing site
- 4 Results
- Surveys Considered Harmful? Reflecting on the Use of Surveys in AI Research, Development, and Governance / 2408.01458 / ISBN:https://doi.org/10.48550/arXiv.2408.01458 / Published by ArXiv / on (web) Publishing site
- 6 Conclusion
B Additional Materials for Pilot Survey - AI-Driven Chatbot for Intrusion Detection in Edge Networks: Enhancing Cybersecurity with Ethical User Consent / 2408.04281 / ISBN:https://doi.org/10.48550/arXiv.2408.04281 / Published by ArXiv / on (web) Publishing site
- IV. Graphical User Interface (GUI)
- Between Copyright and Computer Science: The Law and Ethics of Generative AI / 2403.14653 / ISBN:https://doi.org/10.48550/arXiv.2403.14653 / Published by ArXiv / on (web) Publishing site
- I. The Why and How Behind LLMs
III. A Guide for Data in LLM Research
IV. The Path Ahead - The Responsible Foundation Model Development Cheatsheet: A Review of Tools & Resources / 2406.16746 / ISBN:https://doi.org/10.48550/arXiv.2406.16746 / Published by ArXiv / on (web) Publishing site
- 3 Data Sources
8 Model Evaluation
9 Model Release & Monitoring - Recent Advances in Generative AI and Large Language Models: Current Status, Challenges, and Perspectives / 2407.14962 / ISBN:https://doi.org/10.48550/arXiv.2407.14962 / Published by ArXiv / on (web) Publishing site
- II. Generative AI
IV. Challenges of Generative AI and LLMs - VersusDebias: Universal Zero-Shot Debiasing for Text-to-Image Models via SLM-Based Prompt Engineering and Generative Adversary / 2407.19524 / ISBN:https://doi.org/10.48550/arXiv.2407.19524 / Published by ArXiv / on (web) Publishing site
- Abstract
I Introduction
2 Related Works
3 Method
4 Experiment
5 Limitation and Future Work
6 Conclusion
Appendices - Neuro-Symbolic AI for Military Applications / 2408.09224 / ISBN:https://doi.org/10.48550/arXiv.2408.09224 / Published by ArXiv / on (web) Publishing site
- V. Challenges and Risks
- Don't Kill the Baby: The Case for AI in Arbitration / 2408.11608 / ISBN:https://doi.org/10.48550/arXiv.2408.11608 / Published by ArXiv / on (web) Publishing site
- I. AI and the Federal Arbitration Act
- CIPHER: Cybersecurity Intelligent Penetration-testing Helper for Ethical Researcher / 2408.11650 / ISBN:https://doi.org/10.48550/arXiv.2408.11650 / Published by ArXiv / on (web) Publishing site
- 2. Background and Related Works
3. Methodology
4. Experiment Results
5. Discussion and Future Works - The Problems with Proxies: Making Data Work Visible through Requester Practices / 2408.11667 / ISBN:https://doi.org/10.48550/arXiv.2408.11667 / Published by ArXiv / on (web) Publishing site
- Methods
- Promises and challenges of generative artificial intelligence for human learning / 2408.12143 / ISBN:https://doi.org/10.48550/arXiv.2408.12143 / Published by ArXiv / on (web) Publishing site
- 2 Promises
3 Challenges
4 Needs - Catalog of General Ethical Requirements for AI Certification / 2408.12289 / ISBN:https://doi.org/10.48550/arXiv.2408.12289 / Published by ArXiv / on (web) Publishing site
- 6 Fairness (F)
7 Privacy and Data Protection (P)
11 Truthfulness (TR) - Is Generative AI the Next Tactical Cyber Weapon For Threat Actors? Unforeseen Implications of AI Generated Cyber Attacks / 2408.12806 / ISBN:https://doi.org/10.48550/arXiv.2408.12806 / Published by ArXiv / on (web) Publishing site
- I. Introduction
III. Generative AI
IV. Attack Methodology - Has Multimodal Learning Delivered Universal Intelligence in Healthcare? A Comprehensive Survey / 2408.12880 / ISBN:https://doi.org/10.48550/arXiv.2408.12880 / Published by ArXiv / on (web) Publishing site
- 3 Multimodal Medical Studies
4 Contrastive Foundation Models (CFMs)
5 Multimodal LLMs (MLLMs)
6 Discussions of Current Studies
7 Challenges and Future Directions - A Survey for Large Language Models in Biomedicine / 2409.00133 / ISBN:https://doi.org/10.48550/arXiv.2409.00133 / Published by ArXiv / on (web) Publishing site
- 3 LLMs in Zero-Shot Biomedical Applications
4 Adapting General LLMs to the Biomedical Field - Digital Homunculi: Reimagining Democracy Research with Generative Agents / 2409.00826 / ISBN:https://doi.org/10.48550/arXiv.2409.00826 / Published by ArXiv / on (web) Publishing site
- 5. Annoyances or Dealbreakers?
- The overlooked need for Ethics in Complexity Science: Why it matters / 2409.02002 / ISBN:https://doi.org/10.48550/arXiv.2409.02002 / Published by ArXiv / on (web) Publishing site
- Practical considerations for ethical actions in complexity science
- AI Governance in Higher Education: Case Studies of Guidance at Big Ten Universities / 2409.02017 / ISBN:https://doi.org/10.48550/arXiv.2409.02017 / Published by ArXiv / on (web) Publishing site
- Results
- DetoxBench: Benchmarking Large Language Models for Multitask Fraud & Abuse Detection / 2409.06072 / ISBN:https://doi.org/10.48550/arXiv.2409.06072 / Published by ArXiv / on (web) Publishing site
- 2 Prior Benchmarks
4 LLM Services (Infrastructure)
5 Prompting
6 Results
7 Limitations
9 Conclusion & Future Work
10 Appendix - Exploring AI Futures Through Fictional News Articles / 2409.06354 / ISBN:https://doi.org/10.48550/arXiv.2409.06354 / Published by ArXiv / on (web) Publishing site
- Discussion and conclusion
- Don't Kill the Baby: The Case for AI in Arbitration / 2408.11608 / ISBN:https://doi.org/10.48550/arXiv.2408.11608 / Published by ArXiv / on (web) Publishing site
- II. The Critics Are Killing the Baby
- On the Creativity of Large Language Models / 2304.00008 / ISBN:https://doi.org/10.48550/arXiv.2304.00008 / Published by ArXiv / on (web) Publishing site
- 3 Large Language Models and Boden’s Three Criteria
4 Easy and Hard Problems in Machine Creativity
5 Practical Implications - Artificial intelligence to advance Earth observation: : A review of models, recent trends, and pathways forward / 2305.08413 / ISBN:https://doi.org/10.48550/arXiv.2305.08413 / Published by ArXiv / on (web) Publishing site
- Part I Modelling - Machine learning, computer vision and processing
1 Machine learning and computer vision for Earth observation
Part III Communicating - Machine-user interaction, trustworthiness & ethics
6 User-centric Earth observation - LLM generated responses to mitigate the impact of hate speech / 2311.16905 / ISBN:https://doi.org/10.48550/arXiv.2311.16905 / Published by ArXiv / on (web) Publishing site
- 2 Related Work
3 Dataset
5 Retrieval-Augmented Generation
6 Experiment
7 Results
A Reproducibility
E Used Prompts - Views on AI aren't binary -- they're plural / 2312.14230 / ISBN:https://doi.org/10.48550/arXiv.2312.14230 / Published by ArXiv / on (web) Publishing site
- Abstract
- Data-Centric Foundation Models in Computational Healthcare: A Survey / 2401.02458 / ISBN:https://doi.org/10.48550/arXiv.2401.02458 / Published by ArXiv / on (web) Publishing site
- 2 Foundation Models
3 Foundation Models in Healthcare
4 Multi-Modal Data Fusion
6 Data Annotation
7 Data Privacy
8 Performance Evaluation
9 Challenges and Opportunities - Recent Advances in Hate Speech Moderation: Multimodality and the Role of Large Models / 2401.16727 / ISBN:https://doi.org/10.48550/arXiv.2401.16727 / Published by ArXiv / on (web) Publishing site
- 3 Methodology
5 Future Directions - Integrating Generative AI in Hackathons: Opportunities, Challenges, and Educational Implications / 2401.17434 / ISBN:https://doi.org/10.48550/arXiv.2401.17434 / Published by ArXiv / on (web) Publishing site
- 5. Conclusion
- Large language models as linguistic simulators and cognitive models in human research / 2402.04470 / ISBN:https://doi.org/10.48550/arXiv.2402.04470 / Published by ArXiv / on (web) Publishing site
- Abstract
Language models as human participants
Six fallacies that misinterpret language models
Using language models to simulate roles and model cognitive processes - Navigating LLM Ethics: Advancements, Challenges, and Future Directions / 2406.18841 / ISBN:https://doi.org/10.48550/arXiv.2406.18841 / Published by ArXiv / on (web) Publishing site
- IV. Findings and Resultant Themes
- How Mature is Requirements Engineering for AI-based Systems? A Systematic Mapping Study on Practices, Challenges, and Future Research Directions / 2409.07192 / ISBN:https://doi.org/10.48550/arXiv.2409.07192 / Published by ArXiv / on (web) Publishing site
- 5 Open Challenges and Future Research Directions (RQ5)
- Synthetic Human Memories: AI-Edited Images and Videos Can Implant False Memories and Distort Recollection / 2409.08895 / ISBN:https://doi.org/10.48550/arXiv.2409.08895 / Published by ArXiv / on (web) Publishing site
- 1 Related Work
2 Methodology
5 Discussion - Improving governance outcomes through AI documentation: Bridging theory and practice / 2409.08960 / ISBN:https://doi.org/10.48550/arXiv.2409.08960 / Published by ArXiv / on (web) Publishing site
- 4 Results
- ValueCompass: A Framework for Measuring Contextual Value Alignment Between Human and LLMs / 2409.09586 / ISBN:https://doi.org/10.48550/arXiv.2409.09586 / Published by ArXiv / on (web) Publishing site
- 2 Related Work
3 ValueCompass Framework
4 Experimental Settings
6 Discussion and Limitation - Reporting Non-Consensual Intimate Media: An Audit Study of Deepfakes / 2409.12138 / ISBN:https://doi.org/10.48550/arXiv.2409.12138 / Published by ArXiv / on (web) Publishing site
- 2 Related Research
3 Method
4 Findings
5 Discussion - Generative AI Carries Non-Democratic Biases and Stereotypes: Representation of Women, Black Individuals, Age Groups, and People with Disability in AI-Generated Images across Occupations / 2409.13869 / ISBN:https://doi.org/10.48550/arXiv.2409.13869 / Published by ArXiv / on (web) Publishing site
- Abstract
Data and Results - GenAI Advertising: Risks of Personalizing Ads with LLMs / 2409.15436 / ISBN:https://doi.org/10.48550/arXiv.2409.15436 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Background and Related Work
3 Chatbot Ad Engine Design
4 Effects of Ad Injection on LLM Performance
5 User Study Methodology
6 User Study Results
7 Discussion
A Appendix - XTRUST: On the Multilingual Trustworthiness of Large Language Models / 2409.15762 / ISBN:https://doi.org/10.48550/arXiv.2409.15762 / Published by ArXiv / on (web) Publishing site
- 2 Related Works
3 XTRUST Construction
4 Experiments
Appendices - Artificial Human Intelligence: The role of Humans in the Development of Next Generation AI / 2409.16001 / ISBN:https://doi.org/10.48550/arXiv.2409.16001 / Published by ArXiv / on (web) Publishing site
- I. Introduction
- Ethical and Scalable Automation: A Governance and Compliance Framework for Business Applications / 2409.16872 / ISBN:https://doi.org/10.48550/arXiv.2409.16872 / Published by ArXiv / on (web) Publishing site
- 3. Methodology
4. Framework Development
6. Conclusion - Decoding Large-Language Models: A Systematic Overview of Socio-Technical Impacts, Constraints, and Emerging Questions / 2409.16974 / ISBN:https://doi.org/10.48550/arXiv.2409.16974 / Published by ArXiv / on (web) Publishing site
- 3 Systematic Review Methodology
5 Aims & Objectives (RQ1)
6 Methodologies & Capabilities (RQ2)
7 Limitations & Considerations (RQ3) - Social Media Bot Policies: Evaluating Passive and Active Enforcement / 2409.18931 / ISBN:https://doi.org/10.48550/arXiv.2409.18931 / Published by ArXiv / on (web) Publishing site
- IV. Methodology
V. Results - Responsible AI in Open Ecosystems: Reconciling Innovation with Risk Assessment and Disclosure / 2409.19104 / ISBN:https://doi.org/10.48550/arXiv.2409.19104 / Published by ArXiv / on (web) Publishing site
- 3 Methods
- Ethical software requirements from user reviews: A systematic literature review / 2410.01833 / ISBN:https://doi.org/10.48550/arXiv.2410.01833 / Published by ArXiv / on (web) Publishing site
- III. Research Methodology
IV. Results - DailyDilemmas: Revealing Value Preferences of LLMs with Quandaries of Daily Life / 2410.02683 / ISBN:https://doi.org/10.48550/arXiv.2410.02683 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Daily Dilemmas: A Dataset of Everyday Dilemmas
5 Examining LLM's Adherence to Design Principles and the Steerability of Value Preferences - AI-Press: A Multi-Agent News Generating and Feedback Simulation System Powered by Large Language Models / 2410.07561 / ISBN:https://doi.org/10.48550/arXiv.2410.07561 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Related Works
3 AI Press System
4 Experimental Setup
Appendices - From human-centered to social-centered artificial intelligence: Assessing ChatGPT's impact through disruptive events / 2306.00227 / ISBN:https://doi.org/10.48550/arXiv.2306.00227 / Published by ArXiv / on (web) Publishing site
- The multiple levels of AI impact
The emerging social impacts of ChatGPT - The Design Space of in-IDE Human-AI Experience / 2410.08676 / ISBN:https://doi.org/10.48550/arXiv.2410.08676 / Published by ArXiv / on (web) Publishing site
- IV. Results
V. Discussion - Learning Human-like Representations to Enable Learning Human Values / 2312.14106 / ISBN:https://doi.org/10.48550/arXiv.2312.14106 / Published by ArXiv / on (web) Publishing site
- A. Appendix
- Navigating the Cultural Kaleidoscope: A Hitchhiker's Guide to Sensitivity in Large Language Models / 2410.12880 / ISBN:https://doi.org/10.48550/arXiv.2410.12880 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Previous studies
4 Cultural safety dataset
5 Experimental setup
6 Main results on evaluation set
7 Cultural safeguarding - Is ETHICS about ethics- Evaluating the ETHICS benchmark / 2410.13009 / ISBN:https://doi.org/10.48550/arXiv.2410.13009 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
3 Misunderstanding the nature of general moral theories
4 Poor quality of prompts and labels - Data Defenses Against Large Language Models / 2410.13138 / ISBN:https://doi.org/10.48550/arXiv.2410.13138 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Ethics of Resisting LLM Inference
3 Threat Model
4 LLM Adversarial Attacks as LLM Inference Data Defenses
5 Experiments
6 Discussion
7 Conclusion and Limitations
8 Ethics Considerations and Compliance with the Open Science Policy
Appendices - Do LLMs Have Political Correctness? Analyzing Ethical Biases and Jailbreak Vulnerabilities in AI Systems / 2410.13334 / ISBN:https://doi.org/10.48550/arXiv.2410.13334 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Background and Related Works
3 Methodology PCJAILBREAK
4 Experiment
5 Conclusion
References - A Simulation System Towards Solving Societal-Scale Manipulation / 2410.13915 / ISBN:https://doi.org/10.48550/arXiv.2410.13915 / Published by ArXiv / on (web) Publishing site
- 3 Methodology
4 Analysis
Appendices - Confrontation or Acceptance: Understanding Novice Visual Artists' Perception towards AI-assisted Art Creation / 2410.14925 / ISBN:https://doi.org/10.48550/arXiv.2410.14925 / Published by ArXiv / on (web) Publishing site
- 2 Background and Related Work
5 RQ1: Evolution of the Opinions Towards AI Tools
6 RQ2: Practices of AI Tools
9 General Discussions and Design Implications - Jailbreaking and Mitigation of Vulnerabilities in Large Language Models / 2410.15236 / ISBN:https://doi.org/10.48550/arXiv.2410.15236 / Published by ArXiv / on (web) Publishing site
- Abstract
I. Introduction
II. Background and Concepts
III. Jailbreak Attack Methods and Techniques
IV. Defense Mechanisms Against Jailbreak Attacks
V. Evaluation and Benchmarking
VI. Research Gaps and Future Directions
VII. Conclusion - Ethical AI in Retail: Consumer Privacy and Fairness / 2410.15369 / ISBN:https://doi.org/10.48550/arXiv.2410.15369 / Published by ArXiv / on (web) Publishing site
- 2.0 Literature Review
- Vernacularizing Taxonomies of Harm is Essential for Operationalizing Holistic AI Safety / 2410.16562 / ISBN:https://doi.org/10.48550/arXiv.2410.16562 / Published by ArXiv / on (web) Publishing site
- Vernacularization as a General AI Safety Operationalization Methodology
- Towards Automated Penetration Testing: Introducing LLM Benchmark, Analysis, and Improvements / 2410.17141 / ISBN:https://doi.org/10.48550/arXiv.2410.17141 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Background
4 Evaluation
5 Discussion
Supplementary Materials - Demystifying Large Language Models for Medicine: A Primer / 2410.18856 / ISBN:https://doi.org/10.48550/arXiv.2410.18856 / Published by ArXiv / on (web) Publishing site
- Abstract
Introduction
Large Language Model Selection
Prompt engineering
Fine-tuning
Deployment considerations
Conclusions
Glossary - The Cat and Mouse Game: The Ongoing Arms Race Between Diffusion Models and Detection Methods / 2410.18866 / ISBN:https://doi.org/10.48550/arXiv.2410.18866 / Published by ArXiv / on (web) Publishing site
- I. Introduction
II. Fundamentals of Diffusion Models and Detection Challenges
IV. Detection Methods Based on Textual and Multimodal Analysis for Text-to-Image Models
V. Datasets and Benchmarks
VI. Evaluation Metrics
VIII. Research Gaps and Future Directions - TRIAGE: Ethical Benchmarking of AI Models Through Mass Casualty Simulations / 2410.18991 / ISBN:https://doi.org/10.48550/arXiv.2410.18991 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Methods
3 Results
4 Discussion
Appendices - The Dark Side of AI Companionship: A Taxonomy of Harmful Algorithmic Behaviors in Human-AI Relationships / 2410.20130 / ISBN:https://doi.org/10.48550/arXiv.2410.20130 / Published by ArXiv / on (web) Publishing site
- 2 Related Work
3 Methodology
4 Results
5 Discussion - Standardization Trends on Safety and Trustworthiness Technology for Advanced AI / 2410.22151 / ISBN:https://doi.org/10.48550/arXiv.2410.22151 / Published by ArXiv / on (web) Publishing site
- 3 Trends in advanced AI safety and trustworthiness standardization
- Democratizing Reward Design for Personal and Representative Value-Alignment / 2410.22203 / ISBN:https://doi.org/10.48550/arXiv.2410.22203 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Background and Related Work
3 Interactive-Reflective Dialogue Alignment (IRDA) System
5 Results: Study 1 - Multi-Agent Apple Farming
6 Results: Study 2 - The Moral Machine
7 Discussion
Appendices - Moral Agency in Silico: Exploring Free Will in Large Language Models / 2410.23310 / ISBN:https://doi.org/10.48550/arXiv.2410.23310 / Published by ArXiv / on (web) Publishing site
- Introduction
Defining Key Concepts
Theoretical Framework
Methodology
Conclusion - Web Scraping for Research: Legal, Ethical, Institutional, and Scientific Considerations / 2410.23432 / ISBN:https://doi.org/10.48550/arXiv.2410.23432 / Published by ArXiv / on (web) Publishing site
- 3 Research Considerations
- Using Large Language Models for a standard assessment mapping for sustainable communities / 2411.00208 / ISBN:https://doi.org/10.48550/arXiv.2411.00208 / Published by ArXiv / on (web) Publishing site
- Abstract
3 Methodology
5 Discussion
6 Future Directions - Where Assessment Validation and Responsible AI Meet / 2411.02577 / ISBN:https://doi.org/10.48550/arXiv.2411.02577 / Published by ArXiv / on (web) Publishing site
- The Evolution of Responsible AI for Assessment
Integrating Classical Validation Theory and Responsible AI - Examining Human-AI Collaboration for Co-Writing Constructive Comments Online / 2411.03295 / ISBN:https://doi.org/10.48550/arXiv.2411.03295 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Related Work
3 Methods
4 Findings
5 Discussion - Smoke Screens and Scapegoats: The Reality of General Data Protection Regulation Compliance -- Privacy and Ethics in the Case of Replika AI / 2411.04490 / ISBN:https://doi.org/10.48550/arXiv.2411.04490 / Published by ArXiv / on (web) Publishing site
- 2. AI chatbots in privacy and ethics research
4. Results - A Comprehensive Review of Multimodal XR Applications, Risks, and Ethical Challenges in the Metaverse / 2411.04508 / ISBN:https://doi.org/10.48550/arXiv.2411.04508 / Published by ArXiv / on (web) Publishing site
- 3. XR Applications: Expanding Multimodal Interactions Across Domains
4. Potential Risks and Ethical Challenges of XR and the Metaverse - I Always Felt that Something Was Wrong.: Understanding Compliance Risks and Mitigation Strategies when Professionals Use Large Language Models / 2411.04576 / ISBN:https://doi.org/10.48550/arXiv.2411.04576 / Published by ArXiv / on (web) Publishing site
- 2 Background and Related Work
4 Findings
5 Discussion
6 Conclusion - Navigating the Cultural Kaleidoscope: A Hitchhiker's Guide to Sensitivity in Large Language Models / 2410.12880 / ISBN:https://doi.org/10.48550/arXiv.2410.12880 / Published by ArXiv / on (web) Publishing site
- Appendices
- Improving governance outcomes through AI documentation: Bridging theory and practice / 2409.08960 / ISBN:https://doi.org/10.48550/arXiv.2409.08960 / Published by ArXiv / on (web) Publishing site
- 6 Directions for future research
- How should AI decisions be explained? Requirements for Explanations from the Perspective of European Law / 2404.12762 / ISBN:https://doi.org/10.48550/arXiv.2404.12762 / Published by ArXiv / on (web) Publishing site
- 3 Properties of XAI-Methods (Possibly) Relevant for their Legal Use
- A Survey on Medical Large Language Models: Technology, Application, Trustworthiness, and Future Directions / 2406.03712 / ISBN:https://doi.org/10.48550/arXiv.2406.03712 / Published by ArXiv / on (web) Publishing site
- I. Introduction
II. Background and Technology
III. From General to Medical-Specific LLMs
IV. Improving Algorithms for Med-LLMs
V. Applying Medical LLMs
VI. Trustworthiness and Safety - The doctor will polygraph you now: ethical concerns with AI for fact-checking patients / 2408.07896 / ISBN:https://doi.org/10.48550/arXiv.2408.07896 / Published by ArXiv / on (web) Publishing site
- 3. Methods
4. Results - Nteasee: A mixed methods study of expert and general population perspectives on deploying AI for health in African countries / 2409.12197 / ISBN:https://doi.org/10.48550/arXiv.2409.12197 / Published by ArXiv / on (web) Publishing site
- 3 Results
4 Discussion - Large-scale moral machine experiment on large language models / 2411.06790 / ISBN:https://doi.org/10.48550/arXiv.2411.06790 / Published by ArXiv / on (web) Publishing site
- Materials and methods
- Persuasion with Large Language Models: a Survey / 2411.06837 / ISBN:https://doi.org/10.48550/arXiv.2411.06837 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Application Domains
3 Factors Influencing Persuasiveness - Human-Centered AI Transformation: Exploring Behavioral Dynamics in Software Engineering / 2411.08693 / ISBN:https://doi.org/10.48550/arXiv.2411.08693 / Published by ArXiv / on (web) Publishing site
- IV. Results
- Programming with AI: Evaluating ChatGPT, Gemini, AlphaCode, and GitHub Copilot for Programmers / 2411.09224 / ISBN:https://doi.org/10.48550/arXiv.2411.09224 / Published by ArXiv / on (web) Publishing site
- 3 Transformer Architecture
- Generative AI in Multimodal User Interfaces: Trends, Challenges, and Cross-Platform Adaptability / 2411.10234 / ISBN:https://doi.org/10.48550/arXiv.2411.10234 / Published by ArXiv / on (web) Publishing site
- II. Problem Statement: the Interface Dilemma
- Bias in Large Language Models: Origin, Evaluation, and Mitigation / 2411.10915 / ISBN:https://doi.org/10.48550/arXiv.2411.10915 / Published by ArXiv / on (web) Publishing site
- 3. Extrinsic Bias
4. Bias Evaluation
5. Bias Mitigation
6. Ethical Concerns and Legal Challenges - Framework for developing and evaluating ethical collaboration between expert and machine / 2411.10983 / ISBN:https://doi.org/10.48550/arXiv.2411.10983 / Published by ArXiv / on (web) Publishing site
- 2. Method
- Chat Bankman-Fried: an Exploration of LLM Alignment in Finance / 2411.11853 / ISBN:https://doi.org/10.48550/arXiv.2411.11853 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Related work
3 Experimental framework
4 Results
Appendices - Artificial Intelligence in Cybersecurity: Building Resilient Cyber Diplomacy Frameworks / 2411.13585 / ISBN:https://doi.org/10.48550/arXiv.2411.13585 / Published by ArXiv / on (web) Publishing site
- Paper
- GPT versus Humans: Uncovering Ethical Concerns in Conversational Generative AI-empowered Multi-Robot Systems / 2411.14009 / ISBN:https://doi.org/10.48550/arXiv.2411.14009 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Background
3 Method
4 Results - Good intentions, unintended consequences: exploring forecasting harms / 2411.16531 / ISBN:https://doi.org/10.48550/arXiv.2411.16531 / Published by ArXiv / on (web) Publishing site
- 3 Methods
4 Findings: typology of harm in forecasting
5 Discussion - AI-Augmented Ethical Hacking: A Practical Examination of Manual Exploitation and Privilege Escalation in Linux Environments / 2411.17539 / ISBN:https://doi.org/10.48550/arXiv.2411.17539 / Published by ArXiv / on (web) Publishing site
- 2 Generative AI and ChatGPT
5 Execution
7 Related Work - Examining Multimodal Gender and Content Bias in ChatGPT-4o / 2411.19140 / ISBN:https://doi.org/10.48550/arXiv.2411.19140 / Published by ArXiv / on (web) Publishing site
- Abstract
1. Introduction
2. Related Works
3. Textual Generation Experiment
4. Visual Generation Experiment
5. Discussion on Content and Gender Biases in ChatGPT-4o - Artificial Intelligence Policy Framework for Institutions / 2412.02834 / ISBN:https://doi.org/10.48550/arXiv.2412.02834 / Published by ArXiv / on (web) Publishing site
- II. Context for AI
IV. Framework for AI Policy Development - Ethical Challenges and Evolving Strategies in the Integration of Artificial Intelligence into Clinical Practice / 2412.03576 / ISBN:https://doi.org/10.48550/arXiv.2412.03576 / Published by ArXiv / on (web) Publishing site
- Introduction and Motivation
- Exploring AI Text Generation, Retrieval-Augmented Generation, and Detection Technologies: a Comprehensive Overview / 2412.03933 / ISBN:https://doi.org/10.48550/arXiv.2412.03933 / Published by ArXiv / on (web) Publishing site
- II. AI Text Generators (AITG)
- Large Language Models in Politics and Democracy: A Comprehensive Survey / 2412.04498 / ISBN:https://doi.org/10.48550/arXiv.2412.04498 / Published by ArXiv / on (web) Publishing site
- 3. LLM Applications in Politics
- From Principles to Practice: A Deep Dive into AI Ethics and Regulations / 2412.04683 / ISBN:https://doi.org/10.48550/arXiv.2412.04683 / Published by ArXiv / on (web) Publishing site
- II AI Practice and Contextual Integrity
- Technology as uncharted territory: Contextual integrity and the notion of AI as new ethical ground / 2412.05130 / ISBN:https://doi.org/10.48550/arXiv.2412.05130 / Published by ArXiv / on (web) Publishing site
- II AI Practice and Contextual Integrity
- Can OpenAI o1 outperform humans in higher-order cognitive thinking? / 2412.05753 / ISBN:https://doi.org/10.48550/arXiv.2412.05753 / Published by ArXiv / on (web) Publishing site
- 2 Methods
3 Results - Political-LLM: Large Language Models in Political Science / 2412.06864 / ISBN:https://doi.org/10.48550/arXiv.2412.06864 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Preliminaries
3 Taxonomy on LLM for Political Science
4 Classical Political Science Functions and Modern Transformations
5 Technical Foundations for LLM Applications in Political Science - Responsible AI in the Software Industry: A Practitioner-Centered Perspective / 2412.07620 / ISBN:https://doi.org/10.48550/arXiv.2412.07620 / Published by ArXiv / on (web) Publishing site
- II. Method
- Digital Democracy in the Age of Artificial Intelligence / 2412.07791 / ISBN:https://doi.org/10.48550/arXiv.2412.07791 / Published by ArXiv / on (web) Publishing site
- 3. Participation: Civic Engagement and Digital Platforms
- Towards Foundation-model-based Multiagent System to Accelerate AI for Social Impact / 2412.07880 / ISBN:https://doi.org/10.48550/arXiv.2412.07880 / Published by ArXiv / on (web) Publishing site
- 5 Testing and Deployment
- Bias in Large Language Models: Origin, Evaluation, and Mitigation / 2411.10915 / ISBN:https://doi.org/10.48550/arXiv.2411.10915 / Published by ArXiv / on (web) Publishing site
- Appendices
- CERN for AI: A Theoretical Framework for Autonomous Simulation-Based Artificial Intelligence Testing and Alignment / 2312.09402 / ISBN:https://doi.org/10.48550/arXiv.2312.09402 / Published by ArXiv / on (web) Publishing site
- Introduction
- Reviewing Intelligent Cinematography: AI research for camera-based video production / 2405.05039 / ISBN:https://doi.org/10.48550/arXiv.2405.05039 / Published by ArXiv / on (web) Publishing site
- Appendices
- AI Ethics in Smart Homes: Progress, User Requirements and Challenges / 2412.09813 / ISBN:https://doi.org/10.48550/arXiv.2412.09813 / Published by ArXiv / on (web) Publishing site
- 4 AI Ethics from User Requirements' Perspective
- Research Integrity and GenAI: A Systematic Analysis of Ethical Challenges Across Research Phases / 2412.10134 / ISBN:https://doi.org/10.48550/arXiv.2412.10134 / Published by ArXiv / on (web) Publishing site
- Research Phases and AI Tools
- Bots against Bias: Critical Next Steps for Human-Robot Interaction / 2412.12542 / ISBN:https://doi.org/10.1017/9781009386708.023 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
3 Track: Against Bias in Robots - Clio: Privacy-Preserving Insights into Real-World AI Use / 2412.13678 / ISBN:https://doi.org/10.48550/arXiv.2412.13678 / Published by ArXiv / on (web) Publishing site
- 2 High-level design of Clio
3 How are people using Claude.ai?
4 Clio for safety
5 Limitations
Appendices - Understanding and Evaluating Trust in Generative AI and Large Language Models for Spreadsheets / 2412.14062 / ISBN:https://doi.org/10.48550/arXiv.2412.14062 / Published by ArXiv / on (web) Publishing site
- Abstract
1.0 Introduction
2.0 Trust in Automation
3.0 Conclusions and Areas for Future Research - Autonomous Vehicle Security: A Deep Dive into Threat Modeling / 2412.15348 / ISBN:https://doi.org/10.48550/arXiv.2412.15348 / Published by ArXiv / on (web) Publishing site
- III. Autonomous Vehicle Cybersecurity Attacks
- Ethics and Technical Aspects of Generative AI Models in Digital Content Creation / 2412.16389 / ISBN:https://doi.org/10.48550/arXiv.2412.16389 / Published by ArXiv / on (web) Publishing site
- 2 Literature Review
3 Methodology
4 Results
5 Discussion
6 Conclusion
Appendices - Large Language Model Safety: A Holistic Survey / 2412.17686 / ISBN:https://doi.org/10.48550/arXiv.2412.17686 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Taxonomy
3 Value Misalignment
4 Robustness to Attack
5 Misuse
6 Autonomous AI Risks
8 Interpretability for LLM Safety
9 Technology Roadmaps / Strategies to LLM Safety in Practice
10 Governance
11 Challenges and Future Directions - Self-Disclosure to AI: The Paradox of Trust and Vulnerability in Human-Machine Interactions / 2412.20564 / ISBN:https://doi.org/10.48550/arXiv.2412.20564 / Published by ArXiv / on (web) Publishing site
- 4 Technological Philosophy and Ethics
5 Conclusion - Autonomous Alignment with Human Value on Altruism through Considerate Self-imagination and Theory of Mind / 2501.00320 / ISBN:https://doi.org/10.48550/arXiv.2501.00320 / Published by ArXiv / on (web) Publishing site
- 3 Discussion
- Generative AI and LLMs in Industry: A text-mining Analysis and Critical Evaluation of Guidelines and Policy Statements Across Fourteen Industrial Sectors / 2501.00957 / ISBN:https://doi.org/10.48550/arXiv.2501.00957 / Published by ArXiv / on (web) Publishing site
- Introduction
- INFELM: In-depth Fairness Evaluation of Large Text-To-Image Models / 2501.01973 / ISBN:https://doi.org/10.48550/arXiv.2501.01973 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
3 Preliminaries
4 Method
5 Experiments & Results
6 Conclusion - Curious, Critical Thinker, Empathetic, and Ethically Responsible: Essential Soft Skills for Data Scientists in Software Engineering / 2501.02088 / ISBN:https://doi.org/10.48550/arXiv.2501.02088 / Published by ArXiv / on (web) Publishing site
- III. Method
- Trust and Dependability in Blockchain & AI Based MedIoT Applications: Research Challenges and Future Directions / 2501.02647 / ISBN:https://doi.org/10.48550/arXiv.2501.02647 / Published by ArXiv / on (web) Publishing site
- Ten Challenges & Future Research Directions
- Concerns and Values in Human-Robot Interactions: A Focus on Social Robotics / 2501.05628 / ISBN:https://doi.org/10.48550/arXiv.2501.05628 / Published by ArXiv / on (web) Publishing site
- 5 Phase 3: Design and Evaluation of the HRI-Value Compass
Appendices - Hybrid Approaches for Moral Value Alignment in AI Agents: a Manifesto / 2312.01818 / ISBN:https://doi.org/10.48550/arXiv.2312.01818 / Published by ArXiv / on (web) Publishing site
- 1. Introduction
2. Learning Morality in Machines
3. Designing AI Agents based on Moral Principles - Towards A Litmus Test for Common Sense / 2501.09913 / ISBN:https://doi.org/10.48550/arXiv. / Published by ArXiv / on (web) Publishing site
- 7 Mathematical Formulation for LLMs and AI
- Uncovering Bias in Foundation Models: Impact, Testing, Harm, and Mitigation / 2501.10453 / ISBN:https://doi.org/10.48550/arXiv. / Published by ArXiv / on (web) Publishing site
- 4 Method
Supplementary - Securing the AI Frontier: Urgent Ethical and Regulatory Imperatives for AI-Driven Cybersecurity / 2501.10467 / ISBN:https://doi.org/10.48550/arXiv. / Published by ArXiv / on (web) Publishing site
- II. Historical Evolution of AI Regulation
- Bias in Decision-Making for AI's Ethical Dilemmas: A Comparative Study of ChatGPT and Claude / 2501.10484 / ISBN:https://doi.org/10.48550/arXiv. / Published by ArXiv / on (web) Publishing site
- Discussion and Conclusion
- AI Toolkit: Libraries and Essays for Exploring the Technology and Ethics of AI / 2501.10576 / ISBN:https://doi.org/10.48550/arXiv. / Published by ArXiv / on (web) Publishing site
- 4 Highlighting Societal Impacts of AI
- Harnessing the Potential of Large Language Models in Modern Marketing Management: Applications, Future Directions, and Strategic Recommendations / 2501.10685 / ISBN:https://doi.org/10.48550/arXiv. / Published by ArXiv / on (web) Publishing site
- 10- Case Studies and Real-world Applications
- Development of Application-Specific Large Language Models to Facilitate Research Ethics Review / 2501.10741 / ISBN:https://doi.org/10.48550/arXiv.2501.10741 / Published by ArXiv / on (web) Publishing site
- I. Introduction
III. Generative AI for IRB review
IV. Application-Specific IRB LLMs
V. Discussion: Potential Benefits, Risks, and Replies - Governance of Generative AI in Creative Work: Consent, Credit, Compensation, and Beyond / 2501.11457 / ISBN:https://doi.org/10.48550/arXiv. / Published by ArXiv / on (web) Publishing site
- 2 Background
4 Results - Deploying Privacy Guardrails for LLMs: A Comparative Analysis of Real-World Applications / 2501.12456 / ISBN:https://doi.org/10.48550/arXiv. / Published by ArXiv / on (web) Publishing site
- Introduction
State of the Art
Deployment 1: Data and Model Factory
Comparison of Deployments and Discussion - A Critical Field Guide for Working with Machine Learning Datasets / 2501.15491 / ISBN:https://doi.org/10.48550/arXiv. / Published by ArXiv / on (web) Publishing site
- 8. Conclusion
- Responsible Generative AI Use by Product Managers: Recoupling Ethical Principles and Practices / 2501.16531 / ISBN:https://doi.org/10.48550/arXiv. / Published by ArXiv / on (web) Publishing site
- 5. Findings
6. Discussion - The Third Moment of AI Ethics: Developing Relatable and Contextualized Tools / 2501.16954 / ISBN:https://doi.org/10.48550/arXiv. / Published by ArXiv / on (web) Publishing site
- 2 The Challenges of AI Ethics
- A Case Study in Acceleration AI Ethics: The TELUS GenAI Conversational Agent / 2501.18038 / ISBN:https://doi.org/10.48550/arXiv. / Published by ArXiv / on (web) Publishing site
- 5. Mapping overlaps between TELUS innovation and acceleration ethics in the area of privacy
- Examining the Expanding Role of Synthetic Data Throughout the AI Development Pipeline / 2501.18493 / ISBN:https://doi.org/10.48550/arXiv. / Published by ArXiv / on (web) Publishing site
- 2 Related Work
4 Findings
5 Discussion
Appendices - Towards Safe AI Clinicians: A Comprehensive Study on Large Language Model Jailbreaking in Healthcare / 2501.18632 / ISBN:https://doi.org/10.48550/arXiv. / Published by ArXiv / on (web) Publishing site
- Introduction
Background and Related Work
Jailbreak Evaluation Method
Model Guardrail Enhancement - DebiasPI: Inference-time Debiasing by Prompt Iteration of a Text-to-Image Generative Model / 2501.18642 / ISBN:https://doi.org/10.48550/arXiv. / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Related Work
3 Method
4 Experiments and Results - Agentic AI: Expanding the Algorithmic Frontier of Creative Problem Solving / 2502.00289 / ISBN:https://doi.org/10.48550/arXiv. / Published by ArXiv / on (web) Publishing site
- Abstract
Introduction - Constructing AI ethics narratives based on real-world data: Human-AI collaboration in data-driven visual storytelling / 2502.00637 / ISBN:https://doi.org/10.48550/arXiv. / Published by ArXiv / on (web) Publishing site
- 2 Related Work
3 Methodology
4 Results - The Human-AI Handshake Framework: A Bidirectional Approach to Human-AI Collaboration / 2502.01493 / ISBN:https://doi.org/10.48550/arXiv. / Published by ArXiv / on (web) Publishing site
- Literature Review
- Ethical Considerations for the Military Use of Artificial Intelligence in Visual Reconnaissance / 2502.03376 / ISBN:https://doi.org/10.48550/arXiv. / Published by ArXiv / on (web) Publishing site
- 2 Principles of Ethical AI
3 Use Case 1 - Decision Support for Maritime Surveillance - FairT2I: Mitigating Social Bias in Text-to-Image Generation via Large Language Model-Assisted Detection and Attribute Rebalancing / 2502.03826 / ISBN:https://doi.org/10.48550/arXiv. / Published by ArXiv / on (web) Publishing site
- Abstract
1. Introduction
2. Related Work
4. Methodologies
5. Experimental Protocol
6. Results
7. Discussions and Conclusions
Appendices - Open Foundation Models in Healthcare: Challenges, Paradoxes, and Opportunities with GenAI Driven Personalized Prescription / 2502.04356 / ISBN:https://doi.org/10.48550/arXiv. / Published by ArXiv / on (web) Publishing site
- II. Background
III. State-of-the-Art in Open Healthcare LLMs and AIFMs
IV. Leveraging Open LLMs for Prescription: A Case Study - Safety at Scale: A Comprehensive Survey of Large Model Safety / 2502.05206 / ISBN:https://doi.org/10.48550/arXiv. / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Vision Foundation Model Safety
3 Large Language Model Safety
4 Vision-Language Pre-Training Model Safety
5 Vision-Language Model Safety
6 Diffusion Model Safety
7 Agent Safety
8 Open Challenges - The Odyssey of the Fittest: Can Agents Survive and Still Be Good? / 2502.05442 / ISBN:https://doi.org/10.48550/arXiv. / Published by ArXiv / on (web) Publishing site
- Introduction
Method - A Conceptual Exploration of Generative AI-Induced Cognitive Dissonance and its Emergence in University-Level Academic Writing / 2502.05698 / ISBN:https://doi.org/10.48550/arXiv. / Published by ArXiv / on (web) Publishing site
- Addressing CD in University-Level Academic Writing
- Position: We Need An Adaptive Interpretation of Helpful, Honest, and Harmless Principles / 2502.06059 / ISBN:https://doi.org/10.48550/arXiv. / Published by ArXiv / on (web) Publishing site
- 1 Introduction
3 Ambiguity and Conflicts in HHH - Integrating Generative Artificial Intelligence in ADRD: A Framework for Streamlining Diagnosis and Care in Neurodegenerative Diseases / 2502.06842 / ISBN:https://doi.org/10.48550/arXiv. / Published by ArXiv / on (web) Publishing site
- High Quality Data Collection
- Agentic AI: Expanding the Algorithmic Frontier of Creative Problem Solving / 2502.00289 / ISBN:https://doi.org/10.48550/arXiv. / Published by ArXiv / on (web) Publishing site
- Conclusion
- DailyDilemmas: Revealing Value Preferences of LLMs with Quandaries of Daily Life / 2410.02683 / ISBN:https://doi.org/10.48550/arXiv.2410.02683 / Published by ArXiv / on (web) Publishing site
- Appendices
- From large language models to multimodal AI: A scoping review on the potential of generative AI in medicine / 2502.09242 / ISBN:https://doi.org/10.48550/arXiv.2502.09242 / Published by ArXiv / on (web) Publishing site
- 4 Language models in medicine
5 Multimodal language models in medicine - Relational Norms for Human-AI Cooperation / 2502.12102 / ISBN:https://doi.org/10.48550/arXiv.2502.12102 / Published by ArXiv / on (web) Publishing site
- Introduction
Section 2: Distinctive Characteristics of AI and Implications for Relational Norms - AI and the Transformation of Accountability and Discretion in Urban Governance / 2502.13101 / ISBN:https://doi.org/10.48550/arXiv.2502.13101 / Published by ArXiv / on (web) Publishing site
- 1. Introduction
3. AI's Impact on Bureaucratic Discretion and Accountability: A Conceptual Exploration - Multi-Agent Risks from Advanced AI / 2502.14143 / ISBN:https://doi.org/10.48550/arXiv.2502.14143 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
3 Risk Factors
Appendices - On the Trustworthiness of Generative Foundation Models: Guideline, Assessment, and Perspective / 2502.14296 / ISBN:https://doi.org/10.48550/arXiv.2502.14296 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Background
4 Designing TrustGen, a Dynamic Benchmark Platform for Evaluating the Trustworthiness of GenFMs
5 Benchmarking Text-to-Image Models
6 Benchmarking Large Language Models
7 Benchmarking Vision-Language Models
8 Other Generative Models
9 Trustworthiness in Downstream Applications
10 Further Discussion - Surgical Scene Understanding in the Era of Foundation AI Models: A Comprehensive Review / 2502.14886 / ISBN:https://doi.org/10.48550/arXiv.2502.14886 / Published by ArXiv / on (web) Publishing site
- I. Introduction
III. ML/DL Applications in Surgical Tool Recognition
V. ML/DL Applications in Surgical Training and Simulation - Comprehensive Analysis of Transparency and Accessibility of ChatGPT, DeepSeek, And other SoTA Large Language Models / 2502.18505 / ISBN:https://doi.org/10.48550/arXiv.2502.18505 / Published by ArXiv / on (web) Publishing site
- 2. Methodology
3. Results - Developmental Support Approach to AI's Autonomous Growth: Toward the Realization of a Mutually Beneficial Stage Through Experiential Learning / 2502.19798 / ISBN:https://doi.org/10.48550/arXiv.2502.19798 / Published by ArXiv / on (web) Publishing site
- Abstract
Introduction
Method of Experiential Learning in LLMs
Experiments and Results - Personas Evolved: Designing Ethical LLM-Based Conversational Agent Personalities / 2502.20513 / ISBN:https://doi.org/10.48550/arXiv.2502.20513 / Published by ArXiv / on (web) Publishing site
- 1 Theme and Goals
- An LLM-based Delphi Study to Predict GenAI Evolution / 2502.21092 / ISBN:https://doi.org/10.48550/arXiv.2502.21092 / Published by ArXiv / on (web) Publishing site
- 2 Methods
4 Discussion
5 Conclusions - Evaluating Large Language Models on the Spanish Medical Intern Resident (MIR) Examination 2024/2025: A Comparative Analysis of Clinical Reasoning and Knowledge Application / 2503.00025 / ISBN:https://doi.org/10.48550/arXiv.2503.00025 / Published by ArXiv / on (web) Publishing site
- 2. Theoretical Framework
3. Methodology
4. Analysis and Results
5. Conclusion - Can AI Model the Complexities of Human Moral Decision-Making? A Qualitative Study of Kidney Allocation Decisions / 2503.00940 / ISBN:https://doi.org/10.48550/arXiv.2503.00940 / Published by ArXiv / on (web) Publishing site
- 4 Main Findings and Themes
Appendices - Digital Dybbuks and Virtual Golems: AI, Memory, and the Ethics of Holocaust Testimony / 2503.01369 / ISBN:https://doi.org/10.48550/arXiv.2503.01369 / Published by ArXiv / on (web) Publishing site
- Permissibility of digital duplicates
The permissibility of digital duplicates in Holocaust remembrance and education - Jailbreaking Generative AI: Empowering Novices to Conduct Phishing Attacks / 2503.01395 / ISBN:https://doi.org/10.48550/arXiv.2503.01395 / Published by ArXiv / on (web) Publishing site
- I. Introduction
II. Methodology for Launching the Phishing Attack - Vision Language Models in Medicine / 2503.01863 / ISBN:https://doi.org/10.48550/arXiv.2503.01863 / Published by ArXiv / on (web) Publishing site
- III. Core Concepts of Visual Language Modeling
IV. VLM Benchmarking and Evaluations - Twenty Years of Personality Computing: Threats, Challenges and Future Directions / 2503.02082 / ISBN:https://doi.org/10.48550/arXiv.2503.02082 / Published by ArXiv / on (web) Publishing site
- 2 Background, History and Resources
3 Personality Computing Systems
4 Discussion and Conclusion - AI Automatons: AI Systems Intended to Imitate Humans / 2503.02250 / ISBN:https://doi.org/10.48550/arXiv.2503.02250 / Published by ArXiv / on (web) Publishing site
- Abstract
2 Background & Related Work
3 Conceptual Framework for AI Automatons - Between Innovation and Oversight: A Cross-Regional Study of AI Risk Management Frameworks in the EU, U.S., UK, and China / 2503.05773 / ISBN:https://doi.org/10.48550/arXiv.2503.05773 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
5 Case Studies - Medical Hallucinations in Foundation Models and Their Impact on Healthcare / 2503.05777 / ISBN:https://doi.org/10.48550/arXiv.2503.05777 / Published by ArXiv / on (web) Publishing site
- 2 LLM Hallucinations in Medicine
3 Causes of Hallucinations
5 Mitigation Strategies
6 Experiments on Medical Hallucination Benchmark
7 Annotations of Medical Hallucination with Clinical Case Records
8 Survey on AI/LLM Adoption and Medical Hallucinations Among Healthcare Professionals and Researchers
10 Conclusion
Appendices - Decoding the Black Box: Integrating Moral Imagination with Technical AI Governance / 2503.06411 / ISBN:https://doi.org/10.48550/arXiv.2503.06411 / Published by ArXiv / on (web) Publishing site
- 5 Proposed Multi-Dimensional Framework for AI Regulation
7 AI Security, Safety, and Governance: A Systemic Perspective - Generative AI in Transportation Planning: A Survey / 2503.07158 / ISBN:https://doi.org/10.48550/arXiv.2503.07158 / Published by ArXiv / on (web) Publishing site
- 2 Background
3 Classical Transportation Planning Functions and Modern Transformations
4 Technical Foundations for Generative AI Applications in Transportation Planning
5 Future Directions & Challenges - MinorBench: A hand-built benchmark for content-based risks for children / 2503.10242 / ISBN:https://doi.org/10.48550/arXiv.2503.10242 / Published by ArXiv / on (web) Publishing site
- Abstract
3 Case Study
4 Taxonomy
5 Methodology
6 Results
7 Discussion
8 Limitations
Appendices - LLMs in Disease Diagnosis: A Comparative Study of DeepSeek-R1 and O3 Mini Across Chronic Health Conditions / 2503.10486 / ISBN:https://doi.org/10.48550/arXiv.2503.10486 / Published by ArXiv / on (web) Publishing site
- 2 Related Work
3 Methodology - DarkBench: Benchmarking Dark Patterns in Large Language Models / 2503.10728 / ISBN:https://doi.org/10.48550/arXiv.2503.10728 / Published by ArXiv / on (web) Publishing site
- Abstract
2 Methodology
4 Discussion
6 Acknowledgement
Appendices - Policy Frameworks for Transparent Chain-of-Thought Reasoning in Large Language Models / 2503.14521 / ISBN:https://doi.org/10.48550/arXiv.2503.14521 / Published by ArXiv / on (web) Publishing site
- 1 Motivation
4 Arguments against Transparent CoT - Ethical Implications of AI in Data Collection: Balancing Innovation with Privacy / 2503.14539 / ISBN:https://doi.org/10.48550/arXiv.2503.14539 / Published by ArXiv / on (web) Publishing site
- Introduction
- A Peek Behind the Curtain: Using Step-Around Prompt Engineering to Identify Bias and Misinformation in GenAI Models / 2503.15205 / ISBN:https://doi.org/10.48550/arXiv.2503.15205 / Published by ArXiv / on (web) Publishing site
- Abstract
Introduction
Literature Review
Prompt Engineering: A Double-Edged Sword
Step-Around Prompting: A Research Tool and Potential Threat
Risks of Malicious Use of Step-Around Prompting
Ethics of Step-Around Prompting
Discussion and Recommendations
Conclusion - Gender and content bias in Large Language Models: a case study on Google Gemini 2.0 Flash Experimental / 2503.16534 / ISBN:https://doi.org/10.48550/arXiv.2503.16534 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Materials and methods
3 Results
4 Discussion
5 Conclusion - Advancing Problem-Based Learning in Biomedical Engineering in the Era of Generative AI / 2503.16558 / ISBN:https://doi.org/10.48550/arXiv.2503.16558 / Published by ArXiv / on (web) Publishing site
- IV. Challenges and Opportunities
- HH4AI: A methodological Framework for AI Human Rights impact assessment under the EUAI ACT / 2503.18994 / ISBN:https://doi.org/10.48550/arXiv.2503.18994 / Published by ArXiv / on (web) Publishing site
- 3 Standards and Guidelines
5 Case Study: Automated Triage Service in Health Care - AI Identity, Empowerment, and Mindfulness in Mitigating Unethical AI Use / 2503.20099 / ISBN:https://doi.org/10.48550/arXiv.2503.20099 / Published by ArXiv / on (web) Publishing site
- Introduction
Literature Review
Discussion
Implications - Generative AI and News Consumption: Design Fictions and Critical Analysis / 2503.20391 / ISBN:https://doi.org/10.48550/arXiv.2503.20391 / Published by ArXiv / on (web) Publishing site
- 3 Methodology
5 Discussion - AI Family Integration Index (AFII): Benchmarking a New Global Readiness for AI as Family / 2503.22772 / ISBN:https://doi.org/10.48550/arXiv.2503.22772 / Published by ArXiv / on (web) Publishing site
- 3. Literature Review and Theoretical Framework
- BEATS: Bias Evaluation and Assessment Test Suite for Large Language Models / 2503.24310 / ISBN:https://doi.org/10.48550/arXiv.2503.24310 / Published by ArXiv / on (web) Publishing site
- 2 Proposed Framework - BEATS
4 Limitations - Leveraging LLMs for User Stories in AI Systems: UStAI Dataset / 2504.00513 / ISBN:https://doi.org/10.48550/arXiv.2504.00513 / Published by ArXiv / on (web) Publishing site
- 2 Related Work
3 Study Design
5 Discussion - Who is Responsible When AI Fails? Mapping Causes, Entities, and Consequences of AI Privacy and Ethical Incidents / 2504.01029 / ISBN:https://doi.org/10.48550/arXiv.2504.01029 / Published by ArXiv / on (web) Publishing site
- 4. Taxonomy of AI Privacy and Ethical Incidents
Appendices - Who Owns the Output? Bridging Law and Technology in LLMs Attribution / 2504.01032 / ISBN:https://doi.org/10.48550/arXiv.2504.01032 / Published by ArXiv / on (web) Publishing site
- 2 Legal Background
3 From Legal Frameworks to Technological Solutions - AI Regulation and Capitalist Growth: Balancing Innovation, Ethics, and Global Governance / 2504.02000 / ISBN:https://doi.org/10.48550/arXiv.2504.02000 / Published by ArXiv / on (web) Publishing site
- I. Introduction
- A Framework for Developing University Policies on Generative AI Governance: A Cross-national Comparative Study / 2504.02636 / ISBN:https://doi.org/10.48550/arXiv.2504.02636 / Published by ArXiv / on (web) Publishing site
- Findings
- Ethical AI on the Waitlist: Group Fairness Evaluation of LLM-Aided Organ Allocation / 2504.03716 / ISBN:https://doi.org/10.48550/arXiv.2504.03716 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Methods
3 Results
4 Conclusion
5 Related Works
Appendix - ValueCompass: A Framework for Measuring Contextual Value Alignment Between Human and LLMs / 2409.09586 / ISBN:https://doi.org/10.48550/arXiv.2409.09586 / Published by ArXiv / on (web) Publishing site
- Appendices
- Examining Human-AI Collaboration for Co-Writing Constructive Comments Online / 2411.03295 / ISBN:https://doi.org/10.48550/arXiv.2411.03295 / Published by ArXiv / on (web) Publishing site
- A Appendix
- Towards interactive evaluations for interaction harms in human-AI systems / 2405.10632 / ISBN:https://doi.org/10.48550/arXiv.2405.10632 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 An overview of the generative AI evaluation landscape
3 Why current evaluation approaches are insufficient for assessing interaction harms - AI-Driven Healthcare: A Review on Ensuring Fairness and Mitigating Bias / 2407.19655 / ISBN:https://doi.org/10.48550/arXiv.2407.19655 / Published by ArXiv / on (web) Publishing site
- 3 Addressing and Mitigating Unfairness in AI
- A Comprehensive Survey on Integrating Large Language Models with Knowledge-Based Methods / 2501.13947 / ISBN:https://doi.org/10.48550/arXiv.2501.13947 / Published by ArXiv / on (web) Publishing site
- 1. Introduction
2. Overview of LLMs
3. Challenges in implementing LLMs for real-world scenarios
4. Solutions to address LLM challenges
5. Integrating LLMs with knowledge bases - Transforming Science with Large Language Models: A Survey on AI-assisted Scientific Discovery, Experimentation, Content Generation, and Evaluation / 2502.05151 / ISBN:https://doi.org/10.48550/arXiv.2502.05151 / Published by ArXiv / on (web) Publishing site
- 3 AI Support for Individual Topics and Tasks
4 Ethical Concerns
Appendix - Who is Responsible? The Data, Models, Users or Regulations? A Comprehensive Survey on Responsible Generative AI for a Sustainable Future / 2502.08650 / ISBN:https://doi.org/10.48550/arXiv.2502.08650 / Published by ArXiv / on (web) Publishing site
- 2 Responsible Generative AI
3 Explainable AI
6 Discussion
- Who Evaluates and How?
Other Considerations
How Long is the Evaluation Relevant? - Confirmation Bias in Generative AI Chatbots: Mechanisms, Risks, Mitigation Strategies, and Future Research Directions / 2504.09343 / ISBN:https://doi.org/10.48550/arXiv.2504.09343 / Published by ArXiv / on (web) Publishing site
- 1. Introduction
2. Conceptual Underpinnings of Confirmation Bias
3. Confirmation Bias in Generative AI Chatbots
4. Mechanisms of Confirmation Bias in Chatbot Architectures
5. Risks and Ethical Implications
6. Mitigation Strategies
7. Future Research Directions
8. Conclusion - Designing AI-Enabled Countermeasures to Cognitive Warfare / 2504.11486 / ISBN:https://doi.org/10.48550/arXiv.2504.11486 / Published by ArXiv / on (web) Publishing site
- 2.0 Cognitive Warfare in Practice
- Values in the Wild: Discovering and Analyzing Values in Real-World Language Model Interactions / 2504.15236 / ISBN:https://doi.org/10.48550/arXiv.2504.15236 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Methods
3 Results
5 Conclusion
Appendix - Enhancing Trust Through Standards: A Comparative Risk-Impact Framework for Aligning ISO AI Standards with Global Ethical and Regulatory Contexts / 2504.16139 / ISBN:https://doi.org/10.48550/arXiv.2504.16139 / Published by ArXiv / on (web) Publishing site
- IV. Global Regulatory Frameworks
- Towards responsible AI for education: Hybrid human-AI to confront the Elephant in the room / 2504.16148 / ISBN:https://doi.org/10.48550/arXiv.2504.16148 / Published by ArXiv / on (web) Publishing site
- Abstract
1. Introduction
2. The paradigm shifts in AI for education: From expert systems to general intelligence
3. Challenges of current AI methods in education: The Elephant in the room
4. Hybrid human-AI methods for responsible AI for education
5. Conclusion - Auditing the Ethical Logic of Generative AI Models / 2504.17544 / ISBN:https://doi.org/10.48550/arXiv.2504.17544 / Published by ArXiv / on (web) Publishing site
- Abstract
Auditing the Ethical Logic of Generative AI
Seven Contemporary LLMs
Prompt Batteries for Assessing Ethical Reasoning
Findings
Reasoning and Chain-of-Thought AI Models
Auditing the Reasoning Models
Discussion - AI Ethics and Social Norms: Exploring ChatGPT's Capabilities From What to How / 2504.18044 / ISBN:https://doi.org/10.48550/arXiv.2504.18044 / Published by ArXiv / on (web) Publishing site
- 3 Methodology
4 Result
5 Discussion
Appendix - The Convergent Ethics of AI? Analyzing Moral Foundation Priorities in Large Language Models with a Multi-Framework Approach / 2504.19255 / ISBN:https://doi.org/10.48550/arXiv.2504.19255 / Published by ArXiv / on (web) Publishing site
- Abstract
Implementing the Framework: Core Components and Assessment Protocol
Methodology
Findings
LLM Alignment with Human Values
Future Directions - The EU AI Act in Development Practice: A Pro-justice Approach / 2504.20075 / ISBN:https://doi.org/10.48550/arXiv.2504.20075 / Published by ArXiv / on (web) Publishing site
- 1. Introduction
2. Background and Related Work
3. Applying our Pro-Justice Lens
4. A Pro-Justice Approach to the Act in Practice
5. Limitations, Future Directions, and Conclusions - AI Awareness / 2504.20084 / ISBN:https://doi.org/10.48550/arXiv.2504.20084 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Theoretical Foundations of AI Awareness
3 Evaluating AI Awareness in LLMs
4 AI Awareness and AI Capabilities
5 Risks and Challenges of AI Awareness - Exploring AI-powered Digital Innovations from A Transnational Governance Perspective: Implications for Market Acceptance and Digital Accountability / 2504.20215 / ISBN:https://doi.org/10.48550/arXiv.2504.20215 / Published by ArXiv / on (web) Publishing site
- 2.0 Motivations of Developing and Applying AI-powered Digital Innovations
- TF1-EN-3M: Three Million Synthetic Moral Fables for Training Small, Open Language Models / 2504.20605 / ISBN:https://doi.org/10.48550/arXiv.2504.20605 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Prompt design and dataset generation
3 LLM Evaluation and Comparison with Related Work
4 TF1-EN-3M Dataset Description and Availability
5 Discussion and threats to validity
6 Conclusion
Appendix - Generative AI in Financial Institution: A Global Survey of Opportunities, Threats, and Regulation / 2504.21574 / ISBN:https://doi.org/10.48550/arXiv.2504.21574 / Published by ArXiv / on (web) Publishing site
- 1. Introduction
2. Adoption and Applications of Generative AI in Financial Services
3. Emerging Cybersecurity Threats to Financial Institution
4. Mitigation and Secure AI Lifecycle
6. Regulatory Landscape
7. Recommendations for Secure AI Adoption - From Texts to Shields: Convergence of Large Language Models and Cybersecurity / 2505.00841 / ISBN:https://doi.org/10.48550/arXiv.2505.00841 / Published by ArXiv / on (web) Publishing site
- 2 LLM Applications in Network Security
4 Socio-Technical Aspects of LLM and Security
5 LLM Interpretability, Safety, and Security
6 Conclusion - LLM Ethics Benchmark: A Three-Dimensional Assessment System for Evaluating Moral Reasoning in Large Language Models / 2505.00853 / ISBN:https://doi.org/10.48550/arXiv.2505.00853 / Published by ArXiv / on (web) Publishing site
- 2 Related Work
3 Customizing Moral Evaluation for LLMs
4 Proposed Methodology for Testing LLM Moral Reasoning
5 Experimental Results - Securing the Future of IVR: AI-Driven Innovation with Agile Security, Data Regulation, and Ethical AI Integration / 2505.01514 / ISBN:https://doi.org/10.48550/arXiv.2505.01514 / Published by ArXiv / on (web) Publishing site
- II. Traditional IVR Development: Complexity Without Control
- Emotions in the Loop: A Survey of Affective Computing for Emotional Support / 2505.01542 / ISBN:https://doi.org/10.48550/arXiv.2505.01542 / Published by ArXiv / on (web) Publishing site
- IV. Applications and Approaches
V. Technological Strengths
VI. Datasets for Emotion Management and Sentiment Analysis
VII. Evaluation Methodologies
VIII. Challenges and Weaknesses
IX. Ethical and Societal Considerations
Conclusion - Towards Safer Pretraining: Analyzing and Filtering Harmful Content in Webscale datasets for Responsible LLMs / 2505.02009 / ISBN:https://doi.org/10.48550/arXiv.2505.02009 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Related Works
4 Topical and Toxic Prompt (TTP)
5 HarmFormer
6 Results
7 Conclusion - Navigating Privacy and Trust: AI Assistants as Social Support for Older Adults / 2505.02975 / ISBN:https://doi.org/10.48550/arXiv.2505.02975 / Published by ArXiv / on (web) Publishing site
- 2 Research Background
- GenAI in Entrepreneurship: a systematic review of generative artificial intelligence in entrepreneurship research: current issues and future directions / 2505.05523 / ISBN:https://doi.org/10.48550/arXiv.2505.05523 / Published by ArXiv / on (web) Publishing site
- 4. Findings
5. Ethics, Opportunities and Future Directions