RobertoLofaro.com - Knowledge Portal - human-generated content
Change, with and without technology
for updates on publications, follow @robertolofaro on Instagram or @changerulebook on Twitter, you can also support on Patreon or subscribe on YouTube



You are now here: AI Ethics Primer - search within the bibliography - version 0.4 of 2023-12-13 > (tag cloud) > tag_selected: textual



Tag: textual

Bibliography items where the tag occurs: 288
The AI Index 2022 Annual Report / 2205.03468 / ISBN:https://doi.org/10.48550/arXiv.2205.03468 / Published by ArXiv / on (web) Publishing site
Chapter 2 Technical Performance
Chapter 3 Technical AI Ethics
Appendix


Exciting, Useful, Worrying, Futuristic: Public Perception of Artificial Intelligence in 8 Countries / 2001.00081 / ISBN:https://doi.org/10.48550/arXiv.2001.00081 / Published by ArXiv / on (web) Publishing site
4 Findings
References


The Different Faces of AI Ethics Across the World: A Principle-Implementation Gap Analysis / 2206.03225 / ISBN:https://doi.org/10.48550/arXiv.2206.03225 / Published by ArXiv / on (web) Publishing site
Abstract
1 Introduction
3 Study Methodology
4 Evaluation of Ethical AI Principles


Worldwide AI Ethics: a review of 200 guidelines and recommendations for AI governance / 2206.11922 / ISBN:https://doi.org/10.48550/arXiv.2206.11922 / Published by ArXiv / on (web) Publishing site
1 Introduction
2 Related Work
3 Methodology
5 Discussion
6 Conclusion


A primer on AI ethics via arXiv - focus 2020-2023 / Kaggle / Published by Kaggle / on (web) Publishing site
Section 3: Current trends 2020-2023


What does it mean to be a responsible AI practitioner: An ontology of roles and skills / 2205.03946 / ISBN:https://doi.org/10.48550/arXiv.2205.03946 / Published by ArXiv / on (web) Publishing site
3 Methodology


QB4AIRA: A Question Bank for AI Risk Assessment / 2305.09300 / ISBN:https://doi.org/10.48550/arXiv.2305.09300 / Published by ArXiv / on (web) Publishing site
3 Evaluation


A multilevel framework for AI governance / 2307.03198 / ISBN:https://doi.org/10.48550/arXiv.2307.03198 / Published by ArXiv / on (web) Publishing site
3. International and National Governance
6. Psychology of Trust


The Ethics of AI Value Chains / 2307.16787 / ISBN:https://doi.org/10.48550/arXiv.2307.16787 / Published by ArXiv / on (web) Publishing site
6. Conclusion


Ethical Considerations and Policy Implications for Large Language Models: Guiding Responsible Development and Deployment / 2308.02678 / ISBN:https://doi.org/10.48550/arXiv.2308.02678 / Published by ArXiv / on (web) Publishing site
Introduction


Normative Ethics Principles for Responsible AI Systems: Taxonomy and Future Directions / 2208.12616 / ISBN:https://doi.org/10.48550/arXiv.2208.12616 / Published by ArXiv / on (web) Publishing site
3 Taxonomy of ethical principles
4 Previous operationalisation of ethical principles
5 Gaps in operationalising ethical principles


A Survey of Safety and Trustworthiness of Large Language Models through the Lens of Verification and Validation / 2305.11391 / ISBN:https://doi.org/10.48550/arXiv.2305.11391 / Published by ArXiv / on (web) Publishing site
Reference


Artificial Intelligence across Europe: A Study on Awareness, Attitude and Trust / 2308.09979 / ISBN:https://doi.org/10.48550/arXiv.2308.09979 / Published by ArXiv / on (web) Publishing site
1 Introduction


Exploring the Power of Creative AI Tools and Game-Based Methodologies for Interactive Web-Based Programming / 2308.11649 / ISBN:https://doi.org/10.48550/arXiv.2308.11649 / Published by ArXiv / on (web) Publishing site
4 Enhancing User Experience through Creative AI Tools
11 Bias Awareness: Navigating AI-Generated Content in Education


Collect, Measure, Repeat: Reliability Factors for Responsible AI Data Collection / 2308.12885 / ISBN:https://doi.org/10.48550/arXiv.2308.12885 / Published by ArXiv / on (web) Publishing site
References


Building Trust in Conversational AI: A Comprehensive Review and Solution Architecture for Explainable, Privacy-Aware Systems using LLMs and Knowledge Graph / 2308.13534 / ISBN:https://doi.org/10.48550/arXiv.2308.13534 / Published by ArXiv / on (web) Publishing site
V. Market analysis of LLMs and cross-industry use cases
VI. Solution architecture for privacy-aware and trustworthy conversational AI
References


The Promise and Peril of Artificial Intelligence -- Violet Teaming Offers a Balanced Path Forward / 2308.14253 / ISBN:https://doi.org/10.48550/arXiv.2308.14253 / Published by ArXiv / on (web) Publishing site
2 The evolution of artificial intelligence: from theory to general capabilities


Rethinking Machine Ethics -- Can LLMs Perform Moral Reasoning through the Lens of Moral Theories? / 2308.15399 / ISBN:https://doi.org/10.48550/arXiv.2308.15399 / Published by ArXiv / on (web) Publishing site
2 Related Works
4 Experiment


Ethical Framework for Harnessing the Power of AI in Healthcare and Beyond / 2309.00064 / ISBN:https://doi.org/10.48550/arXiv.2309.00064 / Published by ArXiv / on (web) Publishing site
1 Introduction
4 Human-centric AI
6 Way forward
7 Conclusion
References


The Impact of Artificial Intelligence on the Evolution of Digital Education: A Comparative Study of OpenAI Text Generation Tools including ChatGPT, Bing Chat, Bard, and Ernie / 2309.02029 / ISBN:https://doi.org/10.48550/arXiv.2309.02029 / Published by ArXiv / on (web) Publishing site
1. Introduction
3. ChatGPT Training Process
4. Methods
5. Discussion


Pathway to Future Symbiotic Creativity / 2209.02388 / ISBN:https://doi.org/10.48550/arXiv.2209.02388 / Published by ArXiv / on (web) Publishing site
Part 1 - 1 Generative Systems: Mimicking Artifacts
Part 2 Art Data and Human–Machine Interaction in Art Creation
Part 3 - 3 Comparison with Generative Models
Part 3 - 4 Demonstration of the Proposed Framework
References


The Cambridge Law Corpus: A Corpus for Legal AI Research / 2309.12269 / ISBN:https://doi.org/10.48550/arXiv.2309.12269 / Published by ArXiv / on (web) Publishing site
1 Introduction
2 The Cambridge Law Corpus
General References


Deepfakes, Phrenology, Surveillance, and More! A Taxonomy of AI Privacy Risks / 2310.07879 / ISBN:https://doi.org/10.48550/arXiv.2310.07879 / Published by ArXiv / on (web) Publishing site
2 Background and Related Work
References


ClausewitzGPT Framework: A New Frontier in Theoretical Large Language Model Enhanced Information Operations / 2310.07099 / ISBN:https://doi.org/10.48550/arXiv.2310.07099 / Published by ArXiv / on (web) Publishing site
Nation-State Advances in AI-driven Information Operations
Theoretical Impact of LLMs on Information Operations


The AI Incident Database as an Educational Tool to Raise Awareness of AI Harms: A Classroom Exploration of Efficacy, Limitations, & Future Improvements / 2310.06269 / ISBN:https://doi.org/10.48550/arXiv.2310.06269 / Published by ArXiv / on (web) Publishing site
1 Introduction


A Survey of Large Language Models for Healthcare: from Data, Technology, and Applications to Accountability and Ethics / 2310.05694 / ISBN:https://doi.org/10.48550/arXiv.2310.05694 / Published by ArXiv / on (web) Publishing site
II. WHAT LLMS CAN DO FOR HEALTHCARE? FROM FUNDAMENTAL TASKS TO ADVANCED APPLICATIONS
III. FROM PLMS TO LLMS FOR HEALTHCARE
IV. TRAIN AND USE LLM FOR HEALTHCARE
REFERENCES


Regulation and NLP (RegNLP): Taming Large Language Models / 2310.05553 / ISBN:https://doi.org/10.48550/arXiv.2310.05553 / Published by ArXiv / on (web) Publishing site
3 LLMs: Risk and Uncertainty


Commercialized Generative AI: A Critical Study of the Feasibility and Ethics of Generating Native Advertising Using Large Language Models in Conversational Web Search / 2310.04892 / ISBN:https://doi.org/10.48550/arXiv.2310.04892 / Published by ArXiv / on (web) Publishing site
Evaluation of the Pilot Study


Compromise in Multilateral Negotiations and the Global Regulation of Artificial Intelligence / 2309.17158 / ISBN:https://doi.org/10.48550/arXiv.2309.17158 / Published by ArXiv / on (web) Publishing site
2. The practice of multilateral negotiation and the mechanisms of compromises


Risk of AI in Healthcare: A Comprehensive Literature Review and Study Framework / 2309.14530 / ISBN:https://doi.org/10.48550/arXiv.2309.14530 / Published by ArXiv / on (web) Publishing site
3. Clinical Risks


Who to Trust, How and Why: Untangling AI Ethics Principles, Trustworthiness and Trust / 2309.10318 / ISBN:https://doi.org/10.48550/arXiv.2309.10318 / Published by ArXiv / on (web) Publishing site
Trust and AI Ethics Principles


In Consideration of Indigenous Data Sovereignty: Data Mining as a Colonial Practice / 2309.10215 / ISBN:https://doi.org/10.48550/arXiv.2309.10215 / Published by ArXiv / on (web) Publishing site
References


Toward an Ethics of AI Belief / 2304.14577 / ISBN:https://doi.org/10.48550/arXiv.2304.14577 / Published by ArXiv / on (web) Publishing site
4. Nascent Extant Work that Falls Within the Ethics of AI Belief


The Ethics of AI Value Chains / 2307.16787 / ISBN:https://doi.org/10.48550/arXiv.2307.16787 / Published by ArXiv / on (web) Publishing site
Bibliography


Language Agents for Detecting Implicit Stereotypes in Text-to-Image Models at Scale / 2310.11778 / ISBN:https://doi.org/10.48550/arXiv.2310.11778 / Published by ArXiv / on (web) Publishing site
1 Introduction
Appendix A Data Details


Specific versus General Principles for Constitutional AI / 2310.13798 / ISBN:https://doi.org/10.48550/arXiv.2310.13798 / Published by ArXiv / on (web) Publishing site
5 Related Work


Unpacking the Ethical Value Alignment in Big Models / 2310.17551 / ISBN:https://doi.org/10.48550/arXiv.2310.17551 / Published by ArXiv / on (web) Publishing site
3 Investigating the Ethical Values of Large Language Models


AI for Open Science: A Multi-Agent Perspective for Ethically Translating Data to Knowledge / 2310.18852 / ISBN:https://doi.org/10.48550/arXiv.2310.18852 / Published by ArXiv / on (web) Publishing site
2 Background and Related Work
5 Why Openness in AI for Science
References


Human participants in AI research: Ethics and transparency in practice / 2311.01254 / ISBN:https://doi.org/10.48550/arXiv.2311.01254 / Published by ArXiv / on (web) Publishing site
Abstract
III. Ethical Principles for AI Research with Human Participants
V. Conclusion
References


LLMs grasp morality in concept / 2311.02294 / ISBN:https://doi.org/10.48550/arXiv.2311.02294 / Published by ArXiv / on (web) Publishing site
3 The Meaning Model
4 The Moral Model


Towards Effective Paraphrasing for Information Disguise / 2311.05018 / ISBN:https://doi.org/10.1007/978-3-031-28238-6_22 / Published by ArXiv / on (web) Publishing site
Abstract
References


Unlocking the Potential of ChatGPT: A Comprehensive Exploration of its Applications, Advantages, Limitations, and Future Directions in Natural Language Processing / 2304.02017 / ISBN:https://doi.org/10.48550/arXiv.2304.02017 / Published by ArXiv / on (web) Publishing site
Abstract
2 Overview of ChatGPT and its capabilities
3 Transformers and pre-trained language models
4 Applications of ChatGPT in real-world scenarios
6 Limitations and potential challenges
8 Prompt engineering and generation


Towards ethical multimodal systems / 2304.13765 / ISBN:https://doi.org/10.48550/arXiv.2304.13765 / Published by ArXiv / on (web) Publishing site
2 Related Work


A Brief History of Prompt: Leveraging Language Models (Through Advanced Prompting) / 2310.04438 / ISBN:https://doi.org/10.48550/arXiv.2310.04438 / Published by ArXiv / on (web) Publishing site
Abstract
II. Introduction
III. Prehistoric prompting: pre NN-era
IV. History of NLP between 2010 and 2015: the pre-attention mechanism era
VI. 2015: birth of the transformer
VII. The second wave in 2017: rise of RL
VIII. The third wave 2018: the rise of transformers
IX. 2019: THE YEAR OF CONTROL
X. 2020-2021: the rise of LLMS
XI. 2022-current: beyond language generation
XII. Conclusions
References


Synergizing Human-AI Agency: A Guide of 23 Heuristics for Service Co-Creation with LLM-Based Agents / 2310.15065 / ISBN:https://doi.org/10.48550/arXiv.2310.15065 / Published by ArXiv / on (web) Publishing site
3 Method
4 Findings


How Trustworthy are Open-Source LLMs? An Assessment under Malicious Demonstrations Shows their Vulnerabilities / 2311.09447 / ISBN:https://doi.org/10.48550/arXiv.2311.09447 / Published by ArXiv / on (web) Publishing site
3 Methodology
References


Revolutionizing Customer Interactions: Insights and Challenges in Deploying ChatGPT and Generative Chatbots for FAQs / 2311.09976 / ISBN:https://doi.org/10.48550/arXiv.2311.09976 / Published by ArXiv / on (web) Publishing site
1. Introduction
3. Chatbot approaches overview: Taxonomy of existing methods
4. ChatGPT
7. Future Research Directions


Practical Cybersecurity Ethics: Mapping CyBOK to Ethical Concerns / 2311.10165 / ISBN:https://doi.org/10.48550/arXiv.2311.10165 / Published by ArXiv / on (web) Publishing site
2 Background
3 Methodology
5 Discussion
7 Conclusion
References


Case Repositories: Towards Case-Based Reasoning for AI Alignment / 2311.10934 / ISBN:https://doi.org/10.48550/arXiv.2311.10934 / Published by ArXiv / on (web) Publishing site
3 Related Work and Discussion
References


Responsible AI Considerations in Text Summarization Research: A Review of Current Practices / 2311.11103 / ISBN:https://doi.org/10.48550/arXiv.2311.11103 / Published by ArXiv / on (web) Publishing site
3 Methods
4 Findings
References


Assessing AI Impact Assessments: A Classroom Study / 2311.11193 / ISBN:https://doi.org/10.48550/arXiv.2311.11193 / Published by ArXiv / on (web) Publishing site
3 Study Design
4 Findings
5 Discussion
References


GPT in Data Science: A Practical Exploration of Model Selection / 2311.11516 / ISBN:https://doi.org/10.48550/arXiv.2311.11516 / Published by ArXiv / on (web) Publishing site
II. Background
VI. Future work


Responsible AI Research Needs Impact Statements Too / 2311.11776 / ISBN:https://doi.org/10.48550/arXiv.2311.11776 / Published by ArXiv / on (web) Publishing site
Requiring adverse impact statements for RAI research is long overdue
Suggestions for More Meaningful Engagement with the Impact of RAI Research


Large Language Models in Education: Vision and Opportunities / 2311.13160 / ISBN:https://doi.org/10.48550/arXiv.2311.13160 / Published by ArXiv / on (web) Publishing site
V. Key points in LLMSEDU


Ethical Implications of ChatGPT in Higher Education: A Scoping Review / 2311.14378 / ISBN:https://doi.org/10.48550/arXiv.2311.14378 / Published by ArXiv / on (web) Publishing site
Introduction


Potential Societal Biases of ChatGPT in Higher Education: A Scoping Review / 2311.14381 / ISBN:https://doi.org/10.48550/arXiv.2311.14381 / Published by ArXiv / on (web) Publishing site
OVERVIEW OF SOCIETAL BIASES IN GAI MODELS


From deepfake to deep useful: risks and opportunities through a systematic literature review / 2311.15809 / ISBN:https://doi.org/10.48550/arXiv.2311.15809 / Published by ArXiv / on (web) Publishing site
Abstract
1. Introduction


Survey on AI Ethics: A Socio-technical Perspective / 2311.17228 / ISBN:https://doi.org/10.48550/arXiv.2311.17228 / Published by ArXiv / on (web) Publishing site
3 Transparency and explainability
References


Privacy and Copyright Protection in Generative AI: A Lifecycle Perspective / 2311.18252 / ISBN:https://doi.org/10.48550/arXiv.2311.18252 / Published by ArXiv / on (web) Publishing site
2 Legal Basis of Privacy and Copyright Concerns over Generative AI


Generative AI in Higher Education: Seeing ChatGPT Through Universities' Policies, Resources, and Guidelines / 2312.05235 / ISBN:https://doi.org/10.48550/arXiv.2312.05235 / Published by ArXiv / on (web) Publishing site
5. Results


Contra generative AI detection in higher education assessments / 2312.05241 / ISBN:https://doi.org/10.48550/arXiv.2312.05241 / Published by ArXiv / on (web) Publishing site
1. Introduction
4. Teach critical usage of AI


Intelligence Primer / 2008.07324 / ISBN:https://doi.org/10.48550/arXiv.2008.07324 / Published by ArXiv / on (web) Publishing site
12 Large language models and Generative AI


Ethical Considerations Towards Protestware / 2306.10019 / ISBN:https://doi.org/10.48550/arXiv.2306.10019 / Published by ArXiv / on (web) Publishing site
II. Background


Disentangling Perceptions of Offensiveness: Cultural and Moral Correlates / 2312.06861 / ISBN:https://doi.org/10.48550/arXiv.2312.06861 / Published by ArXiv / on (web) Publishing site
A Appendix


The AI Assessment Scale (AIAS): A Framework for Ethical Integration of Generative AI in Educational Assessment / 2312.07086 / ISBN:https://doi.org/10.48550/arXiv.2312.07086 / Published by ArXiv / on (web) Publishing site
Problematizing The View Of GenAI Content As Academic Misconduct
The AI Assessment Scale


Investigating Responsible AI for Scientific Research: An Empirical Study / 2312.09561 / ISBN:https://doi.org/10.48550/arXiv.2312.09561 / Published by ArXiv / on (web) Publishing site
III. Research methodology


Designing Guiding Principles for NLP for Healthcare: A Case Study of Maternal Health / 2312.11803 / ISBN:https://doi.org/10.48550/arXiv.2312.11803 / Published by ArXiv / on (web) Publishing site
Abstract
5 Discussion
B Extended Guiding Principles


Beyond Fairness: Alternative Moral Dimensions for Assessing Algorithms and Designing Systems / 2312.12559 / ISBN:https://doi.org/10.48550/arXiv.2312.12559 / Published by ArXiv / on (web) Publishing site
2 The Reign of Algorithmic Fairness
References


Learning Human-like Representations to Enable Learning Human Values / 2312.14106 / ISBN:https://doi.org/10.48550/arXiv.2312.14106 / Published by ArXiv / on (web) Publishing site
Abstract
1 Introduction
3 Problem Formulation
4 Learning Human Morality Judgments


Autonomous Threat Hunting: A Future Paradigm for AI-Driven Threat Intelligence / 2401.00286 / ISBN:https://doi.org/10.48550/arXiv.2401.00286 / Published by ArXiv / on (web) Publishing site
2. Foundations of AI-driven threat intelligence
3. Autonomous threat hunting: conceptual framework
4. State-of-the-art AI techniques in autonomous threat hunting
9. Conclusion
References


Exploring the Frontiers of LLMs in Psychological Applications: A Comprehensive Review / 2401.01519 / ISBN:https://doi.org/10.48550/arXiv.2401.01519 / Published by ArXiv / on (web) Publishing site
1. Introduction
2. LLMs in cognitive and behavioral psychology
5. LLMs in social and cultural psychology
6. LLMs as research tools in psychology
7. Challenges and future directions
8. Conclusion


Synthetic Data in AI: Challenges, Applications, and Ethical Implications / 2401.01629 / ISBN:https://doi.org/10.48550/arXiv.2401.01629 / Published by ArXiv / on (web) Publishing site
2. The generation of synthetic data
3. The usage of synthetic data


MULTI-CASE: A Transformer-based Ethics-aware Multimodal Investigative Intelligence Framework / 2401.01955 / ISBN:https://doi.org/10.48550/arXiv.2401.01955 / Published by ArXiv / on (web) Publishing site
I. Introduction
IV. System design
V. Evaluation


Unmasking Bias in AI: A Systematic Review of Bias Detection and Mitigation Strategies in Electronic Health Record-based Models / 2310.19917 / ISBN:https://doi.org/10.48550/arXiv.2310.19917 / Published by ArXiv / on (web) Publishing site
Discussion


Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making / 2401.08691 / ISBN:https://doi.org/10.48550/arXiv.2401.08691 / Published by ArXiv / on (web) Publishing site
Abstract
III Accounting for bias - 7 Addressing fairness in the banking sector
8 Fairview: an evaluative AI support for addressing fairness


Business and ethical concerns in domestic Conversational Generative AI-empowered multi-robot systems / 2401.09473 / ISBN:https://doi.org/10.48550/arXiv.2401.09473 / Published by ArXiv / on (web) Publishing site
2 Background
3 Method


FAIR Enough: How Can We Develop and Assess a FAIR-Compliant Dataset for Large Language Models' Training? / 2401.11033 / ISBN:https://doi.org/10.48550/arXiv.2401.11033 / Published by ArXiv / on (web) Publishing site
3 Data Management Challenges in Large Language Models
4 Framework for FAIR Data Principles Integration in LLM Development
Appendices


Beyond principlism: Practical strategies for ethical AI use in research practices / 2401.15284 / ISBN:https://doi.org/10.48550/arXiv.2401.15284 / Published by ArXiv / on (web) Publishing site
Abstract
1 The “Triple-Too” problem of AI ethics
2 A shift to user-centered realism in scientific contexts
3 Five specific goals and action-guiding strategies for ethical AI use in research practices
References


A Scoping Study of Evaluation Practices for Responsible AI Tools: Steps Towards Effectiveness Evaluations / 2401.17486 / ISBN:https://doi.org/10.48550/arXiv.2401.17486 / Published by ArXiv / on (web) Publishing site
2 Related work
4 RAI tool evaluation practices
References
D Summary of themes and codes


Detecting Multimedia Generated by Large AI Models: A Survey / 2402.00045 / ISBN:https://doi.org/10.48550/arXiv.2402.00045 / Published by ArXiv / on (web) Publishing site
3 Detection


Generative Artificial Intelligence in Higher Education: Evidence from an Analysis of Institutional Policies and Guidelines / 2402.01659 / ISBN:https://doi.org/10.48550/arXiv.2402.01659 / Published by ArXiv / on (web) Publishing site
4. Findings


Trust and ethical considerations in a multi-modal, explainable AI-driven chatbot tutoring system: The case of collaboratively solving Rubik's Cube / 2402.01760 / ISBN:https://doi.org/10.48550/arXiv.2402.01760 / Published by ArXiv / on (web) Publishing site
Abstract
1. Introduction


(A)I Am Not a Lawyer, But...: Engaging Legal Experts towards Responsible LLM Policies for Legal Advice / 2402.01864 / ISBN:https://doi.org/10.48550/arXiv.2402.01864 / Published by ArXiv / on (web) Publishing site
4 Results
5 Discussion


POLARIS: A framework to guide the development of Trustworthy AI systems / 2402.05340 / ISBN:https://doi.org/10.48550/arXiv.2402.05340 / Published by ArXiv / on (web) Publishing site
4 The POLARIS framework
5 POLARIS framework application


I Think, Therefore I am: Benchmarking Awareness of Large Language Models Using AwareBench / 2401.17882 / ISBN:https://doi.org/10.48550/arXiv.2401.17882 / Published by ArXiv / on (web) Publishing site
3 Awareness in LLMs


Taking Training Seriously: Human Guidance and Management-Based Regulation of Artificial Intelligence / 2402.08466 / ISBN:https://doi.org/10.48550/arXiv.2402.08466 / Published by ArXiv / on (web) Publishing site
4 Techniques of Human-Guided Training


User Modeling and User Profiling: A Comprehensive Survey / 2402.09660 / ISBN:https://doi.org/10.48550/arXiv.2402.09660 / Published by ArXiv / on (web) Publishing site
1 Introduction
4 Current Taxonomy
5 Discussion and Future Research Directions
References


Inadequacies of Large Language Model Benchmarks in the Era of Generative Artificial Intelligence / 2402.09880 / ISBN:https://doi.org/10.48550/arXiv.2402.09880 / Published by ArXiv / on (web) Publishing site
IV. Technological Aspects


Evolving AI Collectives to Enhance Human Diversity and Enable Self-Regulation / 2402.12590 / ISBN:https://doi.org/10.48550/arXiv.2402.12590 / Published by ArXiv / on (web) Publishing site
1. Introduction
3. Enhanced Performance of Free-Formed AI Collectives


The European Commitment to Human-Centered Technology: The Integral Role of HCI in the EU AI Act's Success / 2402.14728 / ISBN:https://doi.org/10.48550/arXiv.2402.14728 / Published by ArXiv / on (web) Publishing site
4 There is no trustworthy AI without HCI


Multi-stakeholder Perspective on Responsible Artificial Intelligence and Acceptability in Education / 2402.15027 / ISBN:https://doi.org/10.48550/arXiv.2402.15027 / Published by ArXiv / on (web) Publishing site
2 Background
References


Envisioning the Applications and Implications of Generative AI for News Media / 2402.18835 / ISBN:https://doi.org/10.48550/arXiv.2402.18835 / Published by ArXiv / on (web) Publishing site
2 The Suitability of Generative AI for Newsroom Tasks


FATE in MMLA: A Student-Centred Exploration of Fairness, Accountability, Transparency, and Ethics in Multimodal Learning Analytics / 2402.19071 / ISBN:https://doi.org/10.48550/arXiv.2402.19071 / Published by ArXiv / on (web) Publishing site
5. Discussion


Guidelines for Integrating Value Sensitive Design in Responsible AI Toolkits / 2403.00145 / ISBN:https://doi.org/10.48550/arXiv.2403.00145 / Published by ArXiv / on (web) Publishing site
2 Background and Related Work
5 Discussion


The Minimum Information about CLinical Artificial Intelligence Checklist for Generative Modeling Research (MI-CLAIM-GEN) / 2403.02558 / ISBN:https://doi.org/10.48550/arXiv.2403.02558 / Published by ArXiv / on (web) Publishing site
References


Towards an AI-Enhanced Cyber Threat Intelligence Processing Pipeline / 2403.03265 / ISBN:https://doi.org/10.48550/arXiv.2403.03265 / Published by ArXiv / on (web) Publishing site
Abstract
I. Introduction & Motivation
II. Background & Literature Review
III. The AI-Enhanced CTI Processing Pipeline


A Survey on Human-AI Teaming with Large Pre-Trained Models / 2403.04931 / ISBN:https://doi.org/10.48550/arXiv.2403.04931 / Published by ArXiv / on (web) Publishing site
1 Introduction
2 AI Model Improvements with Human-AI Teaming
3 Effective Human-AI Joint Systems


AGI Artificial General Intelligence for Education / 2304.12479 / ISBN:https://doi.org/10.48550/arXiv.2304.12479 / Published by ArXiv / on (web) Publishing site
3. The Potentials of AGI in Transforming Future Education
References


Moral Sparks in Social Media Narratives / 2310.19268 / ISBN:https://doi.org/10.48550/arXiv.2310.19268 / Published by ArXiv / on (web) Publishing site
5. Results
References


Responsible Artificial Intelligence: A Structured Literature Review / 2403.06910 / ISBN:https://doi.org/10.48550/arXiv.2403.06910 / Published by ArXiv / on (web) Publishing site
3. Analysis


Legally Binding but Unfair? Towards Assessing Fairness of Privacy Policies / 2403.08115 / ISBN:https://doi.org/10.48550/arXiv.2403.08115 / Published by ArXiv / on (web) Publishing site
2 Related Work
5 Representational Fairness
References


Review of Generative AI Methods in Cybersecurity / 2403.08701 / ISBN:https://doi.org/10.48550/arXiv.2403.08701 / Published by ArXiv / on (web) Publishing site
1 Introduction


Trust in AI: Progress, Challenges, and Future Directions / 2403.14680 / ISBN:https://doi.org/10.48550/arXiv.2403.14680 / Published by ArXiv / on (web) Publishing site
3. Findings
Reference


AI Ethics: A Bibliometric Analysis, Critical Issues, and Key Gaps / 2403.14681 / ISBN:https://doi.org/10.48550/arXiv.2403.14681 / Published by ArXiv / on (web) Publishing site
Key AI Ethics Issues
References


Safeguarding Marketing Research: The Generation, Identification, and Mitigation of AI-Fabricated Disinformation / 2403.14706 / ISBN:https://doi.org/10.48550/arXiv.2403.14706 / Published by ArXiv / on (web) Publishing site
Introduction
Methodology
Conclusion


The Journey to Trustworthy AI - Part 1: Pursuit of Pragmatic Frameworks / 2403.15457 / ISBN:https://doi.org/10.48550/arXiv.2403.15457 / Published by ArXiv / on (web) Publishing site
9 A Few Suggestions for a Viable Path Forward
References


Domain-Specific Evaluation Strategies for AI in Journalism / 2403.17911 / ISBN:https://doi.org/10.48550/arXiv.2403.17911 / Published by ArXiv / on (web) Publishing site
2 Existing AI Evaluation Approaches


Power and Play: Investigating License to Critique in Teams' AI Ethics Discussions / 2403.19049 / ISBN:https://doi.org/10.48550/arXiv.2403.19049 / Published by ArXiv / on (web) Publishing site
2 Methods
References


Implications of the AI Act for Non-Discrimination Law and Algorithmic Fairness / 2403.20089 / ISBN:https://doi.org/10.48550/arXiv.2403.20089 / Published by ArXiv / on (web) Publishing site
3 Implications of the AI Act


AI Act and Large Language Models (LLMs): When critical issues and privacy impact require human and ethical oversight / 2404.00600 / ISBN:https://doi.org/10.48550/arXiv.2404.00600 / Published by ArXiv / on (web) Publishing site
6. Large Language Models (LLMs) - Introduction


Exploring the Nexus of Large Language Models and Legal Systems: A Short Survey / 2404.00990 / ISBN:https://doi.org/10.48550/arXiv.2404.00990 / Published by ArXiv / on (web) Publishing site
2 Applications of Large Language Models in Legal Tasks
References


A Review of Multi-Modal Large Language and Vision Models / 2404.01322 / ISBN:https://doi.org/10.48550/arXiv.2404.01322 / Published by ArXiv / on (web) Publishing site
2 What is a Language Model?
5 Vision Models and Multi-Modal Large Language Models
7 Model Evaluation and Benchmarking


Designing for Human-Agent Alignment: Understanding what humans want from their agents / 2404.04289 / ISBN:https://doi.org/10.1145/3613905.3650948 / Published by ArXiv / on (web) Publishing site
4 Findings


A Critical Survey on Fairness Benefits of Explainable AI / 2310.13007 / ISBN:https://doi.org/10.1145/3630106.3658990 / Published by ArXiv / on (web) Publishing site
2 Background
3 Methodology
4 Critical Survey
References


AI Alignment: A Comprehensive Survey / 2310.19852 / ISBN:https://doi.org/10.48550/arXiv.2310.19852 / Published by ArXiv / on (web) Publishing site
2 Learning from Feedback
4 Assurance
References


Regulating AI-Based Remote Biometric Identification. Investigating the Public Demand for Bans, Audits, and Public Database Registrations / 2401.13605 / ISBN:https://doi.org/10.48550/arXiv.2401.13605 / Published by ArXiv / on (web) Publishing site
4 Public Opinion on AI Governance
5 Research Questions
7 Discussion


Generative Ghosts: Anticipating Benefits and Risks of AI Afterlives / 2402.01662 / ISBN:https://doi.org/10.48550/arXiv.2402.01662 / Published by ArXiv / on (web) Publishing site
5 Discussion
6 Conclusion


Epistemic Power in AI Ethics Labor: Legitimizing Located Complaints / 2402.08171 / ISBN:https://doi.org/10.1145/3630106.3658973 / Published by ArXiv / on (web) Publishing site
1 Introduction
3 Automated Model Cards: Legitimacy via Quantified Objectivity
References


On the role of ethics and sustainability in business innovation / 2404.07678 / ISBN:https://doi.org/10.48550/arXiv.2404.07678 / Published by ArXiv / on (web) Publishing site
Recommendations


PoliTune: Analyzing the Impact of Data Selection and Fine-Tuning on Economic and Political Biases in Large Language Models / 2404.08699 / ISBN:https://doi.org/10.48550/arXiv.2404.08699 / Published by ArXiv / on (web) Publishing site
2 Background and Related Work
3 Methodology


Detecting AI Generated Text Based on NLP and Machine Learning Approaches / 2404.10032 / ISBN:https://doi.org/10.48550/arXiv.2404.10032 / Published by ArXiv / on (web) Publishing site
I. Introduction
II. Literature Review
III. Proposed Methodology


Characterizing and modeling harms from interactions with design patterns in AI interfaces / 2404.11370 / ISBN:https://doi.org/10.48550/arXiv.2404.11370 / Published by ArXiv / on (web) Publishing site
4 DECAI: Design-Enhanced Control of AI Systems


Just Like Me: The Role of Opinions and Personal Experiences in The Perception of Explanations in Subjective Decision-Making / 2404.12558 / ISBN:https://doi.org/10.48550/arXiv.2404.12558 / Published by ArXiv / on (web) Publishing site
Abstract
4 Discussion and Implications


Large Language Model Supply Chain: A Research Agenda / 2404.12736 / ISBN:https://doi.org/10.48550/arXiv.2404.12736 / Published by ArXiv / on (web) Publishing site
4 LLM Lifecycle


Modeling Emotions and Ethics with Large Language Models / 2404.13071 / ISBN:https://doi.org/10.48550/arXiv.2404.13071 / Published by ArXiv / on (web) Publishing site
1 Introduction
3 Case Study #1: Linguistic Features of Emotion
4 Qualifying and Quantifying Ethics


Beyond Personhood: Agency, Accountability, and the Limits of Anthropomorphic Ethical Analysis / 2404.13861 / ISBN:https://doi.org/10.48550/arXiv.2404.13861 / Published by ArXiv / on (web) Publishing site
1 Introduction
2 Mechanistic Agency: A Common View in AI Practice
3 Volitional Agency: an Alternative Approach


War Elephants: Rethinking Combat AI and Human Oversight / 2404.19573 / ISBN:https://doi.org/10.48550/arXiv.2404.19573 / Published by ArXiv / on (web) Publishing site
5 Conclusions


Not a Swiss Army Knife: Academics' Perceptions of Trade-Offs Around Generative Artificial Intelligence Use / 2405.00995 / ISBN:https://doi.org/10.48550/arXiv.2405.00995 / Published by ArXiv / on (web) Publishing site
3 Method
References


A Survey on Large Language Models for Critical Societal Domains: Finance, Healthcare, and Law / 2405.01769 / ISBN:https://doi.org/10.48550/arXiv.2405.01769 / Published by ArXiv / on (web) Publishing site
3 Finance
4 Medicine and Healthcare
5 Law
References


Organizing a Society of Language Models: Structures and Mechanisms for Enhanced Collective Intelligence / 2405.03825 / ISBN:https://doi.org/10.48550/arXiv.2405.03825 / Published by ArXiv / on (web) Publishing site
4 Interaction Mechanisms
References


A Fourth Wave of Open Data? Exploring the Spectrum of Scenarios for Open Data and Generative AI / 2405.04333 / ISBN:https://doi.org/10.48550/arXiv.2405.04333 / Published by ArXiv / on (web) Publishing site
Executive Summary
3. A Spectrum of Scenarios of Open Data for Generative AI


Guiding the Way: A Comprehensive Examination of AI Guidelines in Global Media / 2405.04706 / ISBN:https://doi.org/10.48550/arXiv.2405.04706 / Published by ArXiv / on (web) Publishing site
1 Introduction
3 Data and Methods
4 Results
5 Discussion and conclusions


Trustworthy AI-Generative Content in Intelligent 6G Network: Adversarial, Privacy, and Fairness / 2405.05930 / ISBN:https://doi.org/10.48550/arXiv.2405.05930 / Published by ArXiv / on (web) Publishing site
VI. Case Study


RAI Guidelines: Method for Generating Responsible AI Guidelines Grounded in Regulations and Usable by (Non-)Technical Roles / 2307.15158 / ISBN:https://doi.org/10.48550/arXiv.2307.15158 / Published by ArXiv / on (web) Publishing site
1 Introduction
2 Related Work
4 Method for Generating Responsible AI Guidelines
6 Discussion


Redefining Qualitative Analysis in the AI Era: Utilizing ChatGPT for Efficient Thematic Analysis / 2309.10771 / ISBN:https://doi.org/10.48550/arXiv.2309.10771 / on (web) Publishing site
1 Introduction
References


Should agentic conversational AI change how we think about ethics? Characterising an interactional ethics centred on respect / 2401.09082 / ISBN:https://doi.org/10.48550/arXiv.2401.09082 / Published by ArXiv / on (web) Publishing site
Introduction
Social-interactional harms


Unsocial Intelligence: an Investigation of the Assumptions of AGI Discourse / 2401.13142 / ISBN:https://doi.org/10.48550/arXiv.2401.13142 / Published by ArXiv / on (web) Publishing site
Abstract
1 Introduction
2 Between Human Intelligence and Technology: AGI’s Dual Value-Laden Pedigrees
3 The Motley Choices of AGI Discourse
4 Towards Contextualized, Politically Legitimate, and Social Intelligence
5 Conclusion: Politically Legitimate Intelligence


The Wolf Within: Covert Injection of Malice into MLLM Societies via an MLLM Operative / 2402.14859 / ISBN:https://doi.org/10.48550/arXiv.2402.14859 / Published by ArXiv / on (web) Publishing site
3. Methodology
4. Experiments


Social Choice Should Guide AI Alignment in Dealing with Diverse Human Feedback / 2404.10271 / ISBN:https://doi.org/10.48550/arXiv.2404.10271 / Published by ArXiv / on (web) Publishing site
References


A scoping review of using Large Language Models (LLMs) to investigate Electronic Health Records (EHRs) / 2405.03066 / ISBN:https://doi.org/10.48550/arXiv.2405.03066 / Published by ArXiv / on (web) Publishing site
1 Introduction
2 Materials
3 Results
References


Integrating Emotional and Linguistic Models for Ethical Compliance in Large Language Models / 2405.07076 / ISBN:https://doi.org/10.48550/arXiv.2405.07076 / Published by ArXiv / on (web) Publishing site
Appendix S: Multiple Adversarial LLMs


Using ChatGPT for Thematic Analysis / 2405.08828 / ISBN:https://doi.org/10.48550/arXiv.2405.08828 / Published by ArXiv / on (web) Publishing site
2 Coding in Thematic Analysis: Manual vs GPT-driven Approaches
4 Validation Using Topic Modeling
7 Conclusion


When AI Eats Itself: On the Caveats of Data Pollution in the Era of Generative AI / 2405.09597 / ISBN:https://doi.org/10.48550/arXiv.2405.09597 / Published by ArXiv / on (web) Publishing site
1 Introduction


Cyber Risks of Machine Translation Critical Errors : Arabic Mental Health Tweets as a Case Study / 2405.11668 / ISBN:https://doi.org/10.48550/arXiv.2405.11668 / Published by ArXiv / on (web) Publishing site
5. Quality Metrics Performance


The Narrow Depth and Breadth of Corporate Responsible AI Research / 2405.12193 / ISBN:https://doi.org/10.48550/arXiv.2405.12193 / Published by ArXiv / on (web) Publishing site
2 Related Literature on Industry’s Engagement in Responsible AI Research


A Comprehensive Overview of Large Language Models (LLMs) for Cyber Defences: Opportunities and Directions / 2405.14487 / ISBN:https://doi.org/10.48550/arXiv.2405.14487 / Published by ArXiv / on (web) Publishing site
Abstract
I. Introduction
III. Vulnerability Assessment
IV. Network Security
VII. Cyber Security Operations Automation
References


Towards Clinical AI Fairness: Filling Gaps in the Puzzle / 2405.17921 / ISBN:https://doi.org/10.48550/arXiv.2405.17921 / Published by ArXiv / on (web) Publishing site
Abstract
Discussion
Conclusion
Additional material


The ethical situation of DALL-E 2 / 2405.19176 / ISBN:https://doi.org/10.48550/arXiv.2405.19176 / Published by ArXiv / on (web) Publishing site
2 Understanding what DALL-E 2 can actually do


Using Large Language Models for Humanitarian Frontline Negotiation: Opportunities and Considerations / 2405.20195 / ISBN:https://doi.org/10.48550/arXiv.2405.20195 / Published by ArXiv / on (web) Publishing site
A. Appendix


Responsible AI for Earth Observation / 2405.20868 / ISBN:https://doi.org/10.48550/arXiv.2405.20868 / Published by ArXiv / on (web) Publishing site
2 Mitigating (Unfair) Bias
5 Maintaining Scientific Excellence, Open Data, and Guiding AI Usage Based on Ethical Principles in EO


Gender Bias Detection in Court Decisions: A Brazilian Case Study / 2406.00393 / ISBN:https://doi.org/10.48550/arXiv.2406.00393 / Published by ArXiv / on (web) Publishing site
A DVC Dataset: Domestic Violence Cases


Transforming Computer Security and Public Trust Through the Exploration of Fine-Tuning Large Language Models / 2406.00628 / ISBN:https://doi.org/10.48550/arXiv.2406.00628 / Published by ArXiv / on (web) Publishing site
1 Introduction
2 Background, Foundational Studies, and Discussion
3 Experimental Design, Overview, and Discussion
5 Discussion and further research


How Ethical Should AI Be? How AI Alignment Shapes the Risk Preferences of LLMs / 2406.01168 / ISBN:https://doi.org/10.48550/arXiv.2406.01168 / Published by ArXiv / on (web) Publishing site
I. Description of Method/Empirical Design
IV. Impact of Alignments on Corporate Investment Forecasts
Figures and tables


Promoting Fairness and Diversity in Speech Datasets for Mental Health and Neurological Disorders Research / 2406.04116 / ISBN:https://doi.org/10.48550/arXiv.2406.04116 / Published by ArXiv / on (web) Publishing site
4. Desiderata
References


MoralBench: Moral Evaluation of LLMs / 2406.04428 / Published by ArXiv / on (web) Publishing site
Abstract
4 Experiments


Can Prompt Modifiers Control Bias? A Comparative Analysis of Text-to-Image Generative Models / 2406.05602 / Published by ArXiv / on (web) Publishing site
2. Related Work
References


Deception Analysis with Artificial Intelligence: An Interdisciplinary Perspective / 2406.05724 / ISBN:https://doi.org/10.48550/arXiv.2406.05724 / Published by ArXiv / on (web) Publishing site
2 Theories and Components of Deception
3 Reductionism & Previous Research in Deceptive AI


An Empirical Design Justice Approach to Identifying Ethical Considerations in the Intersection of Large Language Models and Social Robotics / 2406.06400 / ISBN:https://doi.org/10.48550/arXiv.2406.06400 / Published by ArXiv / on (web) Publishing site
1 Introduction
2 Theoretical Background
3 Methodology
6 Conclusions and Recommendations
References
Appendix A: Interview questions
Appendix B: Collected data summary


The Ethics of Interaction: Mitigating Security Threats in LLMs / 2401.12273 / ISBN:https://doi.org/10.48550/arXiv.2401.12273 / Published by ArXiv / on (web) Publishing site
1 Introduction
4 Towards Ethical Mitigation: A Proposed Methodology


Fair by design: A sociotechnical approach to justifying the fairness of AI-enabled systems across the lifecycle / 2406.09029 / ISBN:https://doi.org/10.48550/arXiv.2406.09029 / Published by ArXiv / on (web) Publishing site
1 Introduction
2 Fairness and AI


Some things never change: how far generative AI can really change software engineering practice / 2406.09725 / ISBN:https://doi.org/10.48550/arXiv.2406.09725 / Published by ArXiv / on (web) Publishing site
4 Results


Federated Learning driven Large Language Models for Swarm Intelligence: A Survey / 2406.09831 / ISBN:https://doi.org/10.48550/arXiv.2406.09831 / Published by ArXiv / on (web) Publishing site
II. Foundations and Integration of SI and LLM


Applications of Generative AI in Healthcare: algorithmic, ethical, legal and societal considerations / 2406.10632 / ISBN:https://doi.org/10.48550/arXiv.2406.10632 / Published by ArXiv / on (web) Publishing site
Appendix C Algorithmic / technical aspects


Justice in Healthcare Artificial Intelligence in Africa / 2406.10653 / ISBN:https://doi.org/10.48550/arXiv.2406.10653 / Published by ArXiv / on (web) Publishing site
1. Beyond Bias and Fairness


Conversational Agents as Catalysts for Critical Thinking: Challenging Design Fixation in Group Design / 2406.11125 / ISBN:https://doi.org/10.48550/arXiv.2406.11125 / Published by ArXiv / on (web) Publishing site
6 POTENTIAL DESIGN CONSIDERATIONS


Current state of LLM Risks and AI Guardrails / 2406.12934 / ISBN:https://doi.org/10.48550/arXiv.2406.12934 / Published by ArXiv / on (web) Publishing site
2 Large Language Model Risks
3 Strategies in Securing Large Language models


Leveraging Large Language Models for Patient Engagement: The Power of Conversational AI in Digital Health / 2406.13659 / ISBN:https://doi.org/10.48550/arXiv.2406.13659 / Published by ArXiv / on (web) Publishing site
II. RECENT ADVANCEMENTS IN LARGE LANGUAGE MODELS


AI Alignment through Reinforcement Learning from Human Feedback? Contradictions and Limitations / 2406.18346 / ISBN:https://doi.org/10.48550/arXiv.2406.18346 / Published by ArXiv / on (web) Publishing site
2 Background
3 Limitations of RLxF


SecGenAI: Enhancing Security of Cloud-based Generative AI Applications within Australian Critical Technologies of National Interest / 2407.01110 / ISBN:https://doi.org/10.48550/arXiv.2407.01110 / Published by ArXiv / on (web) Publishing site
II. UNDERSTANDING GENAI SECURITY


Artificial intelligence, rationalization, and the limits of control in the public sector: the case of tax policy optimization / 2407.05336 / ISBN:https://doi.org/10.48550/arXiv.2407.05336 / Published by ArXiv / on (web) Publishing site
4. AI-driven tax policy to reduce economic inequality: a thought experiment


Operationalising AI governance through ethics-based auditing: An industry case study / 2407.06232 / Published by ArXiv / on (web) Publishing site
7. Limitations


Rethinking Machine Ethics -- Can LLMs Perform Moral Reasoning through the Lens of Moral Theories? / 2308.15399 / ISBN:https://doi.org/10.48550/arXiv.2308.15399 / Published by ArXiv / on (web) Publishing site
A Details of Datasets


Bridging the Global Divide in AI Regulation: A Proposal for a Contextual, Coherent, and Commensurable Framework / 2303.11196 / ISBN:https://doi.org/10.48550/arXiv.2303.11196 / Published by ArXiv / on (web) Publishing site
Abstract
I. Introduction
III. Striking a Balance Between the Two Approaches
IV. Proposing an Alternative 3C Framework


CogErgLLM: Exploring Large Language Model Systems Design Perspective Using Cognitive Ergonomics / 2407.02885 / ISBN:https://doi.org/10.48550/arXiv.2407.02885 / Published by ArXiv / on (web) Publishing site
5 Case Studies


Past, Present, and Future: A Survey of The Evolution of Affective Robotics For Well-being / 2407.02957 / ISBN:https://doi.org/10.48550/arXiv.2407.02957 / Published by ArXiv / on (web) Publishing site
IV. Evolution of Affective Robots for Well-Being
References


With Great Power Comes Great Responsibility: The Role of Software Engineers / 2407.08823 / ISBN:https://doi.org/10.48550/arXiv.2407.08823 / Published by ArXiv / on (web) Publishing site
2 Background and Related Work


Have We Reached AGI? Comparing ChatGPT, Claude, and Gemini to Human Literacy and Education Benchmarks / 2407.09573 / ISBN:https://doi.org/10.48550/arXiv.2407.09573 / Published by ArXiv / on (web) Publishing site
2 Literature Review


Generative AI for Health Technology Assessment: Opportunities, Challenges, and Policy Considerations / 2407.11054 / ISBN:https://doi.org/10.48550/arXiv.2407.11054 / Published by ArXiv / on (web) Publishing site
Applications of generative AI in literature reviews and evidence synthesis
Applications of generative AI to health economic modeling
Glossary
References


Thorns and Algorithms: Navigating Generative AI Challenges Inspired by Giraffes and Acacias / 2407.11360 / ISBN:https://doi.org/10.48550/arXiv.2407.11360 / Published by ArXiv / on (web) Publishing site
4 Generative AI and Humans: Risks and Mitigation


Open Artificial Knowledge / 2407.14371 / ISBN:https://doi.org/10.48550/arXiv.2407.14371 / Published by ArXiv / on (web) Publishing site
2. Key Challenges of Artificial Data
4. Automatic Prompt Generation
Appendices


RogueGPT: dis-ethical tuning transforms ChatGPT4 into a Rogue AI in 158 Words / 2407.15009 / ISBN:https://doi.org/10.48550/arXiv.2407.15009 / Published by ArXiv / on (web) Publishing site
I. Introduction
II. Background


Nudging Using Autonomous Agents: Risks and Ethical Considerations / 2407.16362 / ISBN:https://doi.org/10.48550/arXiv.2407.16362 / Published by ArXiv / on (web) Publishing site
3 Examples of Biases


Mapping the individual, social, and biospheric impacts of Foundation Models / 2407.17129 / ISBN:https://doi.org/10.48550/arXiv.2407.17129 / Published by ArXiv / on (web) Publishing site
References


Integrating ESG and AI: A Comprehensive Responsible AI Assessment Framework / 2408.00965 / ISBN:https://doi.org/10.48550/arXiv.2408.00965 / Published by ArXiv / on (web) Publishing site
2 Background and Literature Review
References


Surveys Considered Harmful? Reflecting on the Use of Surveys in AI Research, Development, and Governance / 2408.01458 / ISBN:https://doi.org/10.48550/arXiv.2408.01458 / Published by ArXiv / on (web) Publishing site
1 Introduction
5 Discussion


Improving Large Language Model (LLM) fidelity through context-aware grounding: A systematic approach to reliability and veracity / 2408.04023 / ISBN:https://doi.org/10.48550/arXiv.2408.04023 / Published by ArXiv / on (web) Publishing site
Abstract
1. Introduction
2. Related Work
3. Proposed framework
4. Model architecture and training parameters
5. Model Training
6. Results
7. Conclusion and Future Directions
References


Between Copyright and Computer Science: The Law and Ethics of Generative AI / 2403.14653 / ISBN:https://doi.org/10.48550/arXiv.2403.14653 / Published by ArXiv / on (web) Publishing site
I. The Why and How Behind LLMs
II. The Difference Between Academic and Commercial Research


The Responsible Foundation Model Development Cheatsheet: A Review of Tools & Resources / 2406.16746 / ISBN:https://doi.org/10.48550/arXiv.2406.16746 / Published by ArXiv / on (web) Publishing site
1 Introduction
2 Methodology & Guidelines
4 Data Preparation
References


Recent Advances in Generative AI and Large Language Models: Current Status, Challenges, and Perspectives / 2407.14962 / ISBN:https://doi.org/10.48550/arXiv.2407.14962 / Published by ArXiv / on (web) Publishing site
II. Generative AI
III. Language Modeling
IV. Challenges of Generative AI and LLMs


VersusDebias: Universal Zero-Shot Debiasing for Text-to-Image Models via SLM-Based Prompt Engineering and Generative Adversary / 2407.19524 / ISBN:https://doi.org/10.48550/arXiv.2407.19524 / Published by ArXiv / on (web) Publishing site
I Introduction


Speculations on Uncertainty and Humane Algorithms / 2408.06736 / ISBN:https://doi.org/10.48550/arXiv.2408.06736 / Published by ArXiv / on (web) Publishing site
3 Uncertainty Ex Machina


Visualization Atlases: Explaining and Exploring Complex Topics through Data, Visualization, and Narration / 2408.07483 / ISBN:https://doi.org/10.48550/arXiv.2408.07483 / Published by ArXiv / on (web) Publishing site
3 Visualization Atlas Design Patterns
6 Key Characteristics of Visualization Atlases


Neuro-Symbolic AI for Military Applications / 2408.09224 / ISBN:https://doi.org/10.48550/arXiv.2408.09224 / Published by ArXiv / on (web) Publishing site
References


Don't Kill the Baby: The Case for AI in Arbitration / 2408.11608 / ISBN:https://doi.org/10.48550/arXiv.2408.11608 / Published by ArXiv / on (web) Publishing site
1. Resistance Against AI Does Not Offer Conclusive Reasons for Outright Rejection


CIPHER: Cybersecurity Intelligent Penetration-testing Helper for Ethical Researcher / 2408.11650 / ISBN:https://doi.org/10.48550/arXiv.2408.11650 / Published by ArXiv / on (web) Publishing site
2. Background and Related Works
5. Discussion and Future Works


The Problems with Proxies: Making Data Work Visible through Requester Practices / 2408.11667 / ISBN:https://doi.org/10.48550/arXiv.2408.11667 / Published by ArXiv / on (web) Publishing site
Abstract
Related Work
Methods


Promises and challenges of generative artificial intelligence for human learning / 2408.12143 / ISBN:https://doi.org/10.48550/arXiv.2408.12143 / Published by ArXiv / on (web) Publishing site
2 Promises
References


Catalog of General Ethical Requirements for AI Certification / 2408.12289 / ISBN:https://doi.org/10.48550/arXiv.2408.12289 / Published by ArXiv / on (web) Publishing site
2 Operationalizable minimum requirements
6 Fairness (F)
8 Safety and Robustness (SR)
10 Transparency and Explainability (T)


Is Generative AI the Next Tactical Cyber Weapon For Threat Actors? Unforeseen Implications of AI Generated Cyber Attacks / 2408.12806 / ISBN:https://doi.org/10.48550/arXiv.2408.12806 / Published by ArXiv / on (web) Publishing site
III. Generative AI
IV. Attack Methodology


Has Multimodal Learning Delivered Universal Intelligence in Healthcare? A Comprehensive Survey / 2408.12880 / ISBN:https://doi.org/10.48550/arXiv.2408.12880 / Published by ArXiv / on (web) Publishing site
2 Preliminaries
3 Multimodal Medical Studies
4 Contrastive Foundation Models (CFMs)
5 Multimodal LLMs (MLLMs)
6 Discussions of Current Studies
7 Challenges and Future Directions
References


Aligning XAI with EU Regulations for Smart Biomedical Devices: A Methodology for Compliance Analysis / 2408.15121 / ISBN:https://doi.org/10.48550/arXiv.2408.15121 / Published by ArXiv / on (web) Publishing site
7 Case Studies: Closed-Loop and Semi-Closed-Loop Control


Trustworthy and Responsible AI for Human-Centric Autonomous Decision-Making Systems / 2408.15550 / ISBN:https://doi.org/10.48550/arXiv.2408.15550 / Published by ArXiv / on (web) Publishing site
5 Trustworthy and Responsible AI in Human-centric Applications


A Survey for Large Language Models in Biomedicine / 2409.00133 / ISBN:https://doi.org/10.48550/arXiv.2409.00133 / Published by ArXiv / on (web) Publishing site
2 Background
3 LLMs in Zero-Shot Biomedical Applications
4 Adapting General LLMs to the Biomedical Field
5 Discussion
References


AI Governance in Higher Education: Case Studies of Guidance at Big Ten Universities / 2409.02017 / ISBN:https://doi.org/10.48550/arXiv.2409.02017 / Published by ArXiv / on (web) Publishing site
Background
Methods


DetoxBench: Benchmarking Large Language Models for Multitask Fraud & Abuse Detection / 2409.06072 / ISBN:https://doi.org/10.48550/arXiv.2409.06072 / Published by ArXiv / on (web) Publishing site
1 Introduction
2 Prior Benchmarks
3 Data Details
7 Limitations
9 Conclusion & Future Work
10 Appendix


On the Creativity of Large Language Models / 2304.00008 / ISBN:https://doi.org/10.48550/arXiv.2304.00008 / Published by ArXiv / on (web) Publishing site
2 A Creative Journey from Ada Lovelace to Foundation Models


LLM generated responses to mitigate the impact of hate speech / 2311.16905 / ISBN:https://doi.org/10.48550/arXiv.2311.16905 / Published by ArXiv / on (web) Publishing site
9 Limitations


Views on AI aren't binary -- they're plural / 2312.14230 / ISBN:https://doi.org/10.48550/arXiv.2312.14230 / Published by ArXiv / on (web) Publishing site
References


Data-Centric Foundation Models in Computational Healthcare: A Survey / 2401.02458 / ISBN:https://doi.org/10.48550/arXiv.2401.02458 / Published by ArXiv / on (web) Publishing site
2 Foundation Models
5 Data Quantity
6 Data Annotation
8 Performance Evaluation


Recent Advances in Hate Speech Moderation: Multimodality and the Role of Large Models / 2401.16727 / ISBN:https://doi.org/10.48550/arXiv.2401.16727 / Published by ArXiv / on (web) Publishing site
1 Introduction
2 Hate Speech
3 Methodology
4 Challenges
5 Future Directions
References


Large language models as linguistic simulators and cognitive models in human research / 2402.04470 / ISBN:https://doi.org/10.48550/arXiv.2402.04470 / Published by ArXiv / on (web) Publishing site
Language models as human participants
Six fallacies that misinterpret language models
Using language models to simulate roles and model cognitive processes
References


Navigating LLM Ethics: Advancements, Challenges, and Future Directions / 2406.18841 / ISBN:https://doi.org/10.48550/arXiv.2406.18841 / Published by ArXiv / on (web) Publishing site
II. Conceptualization and frameworks
IV. Findings and Resultant Themes
References


How Mature is Requirements Engineering for AI-based Systems? A Systematic Mapping Study on Practices, Challenges, and Future Research Directions / 2409.07192 / ISBN:https://doi.org/10.48550/arXiv.2409.07192 / Published by ArXiv / on (web) Publishing site
4 Results
5 Open Challenges and Future Research Directions (RQ5)


Synthetic Human Memories: AI-Edited Images and Videos Can Implant False Memories and Distort Recollection / 2409.08895 / ISBN:https://doi.org/10.48550/arXiv.2409.08895 / Published by ArXiv / on (web) Publishing site
1 Related Work
5 Discussion


ValueCompass: A Framework of Fundamental Values for Human-AI Alignment / 2409.09586 / ISBN:https://doi.org/10.48550/arXiv.2409.09586 / Published by ArXiv / on (web) Publishing site
2 Related Work
3 Designing ValueCompass: A Comprehensive Framework for Defining Fundamental Values in Alignment
4 Operationalizing ValueCompass: Methods to Measure Value Alignment of Humans and AI
5 Findings with ValueCompass: The Status Quo of Human-AI Value Alignment


Beyond Algorithmic Fairness: A Guide to Develop and Deploy Ethical AI-Enabled Decision-Support Tools / 2409.11489 / ISBN:https://doi.org/10.48550/arXiv.2409.11489 / Published by ArXiv / on (web) Publishing site
2 Ethical Considerations in AI-Enabled Optimization
3 Case Studies in AI-Enabled Optimization


Reporting Non-Consensual Intimate Media: An Audit Study of Deepfakes / 2409.12138 / ISBN:https://doi.org/10.48550/arXiv.2409.12138 / Published by ArXiv / on (web) Publishing site
References


Generative AI Carries Non-Democratic Biases and Stereotypes: Representation of Women, Black Individuals, Age Groups, and People with Disability in AI-Generated Images across Occupations / 2409.13869 / ISBN:https://doi.org/10.48550/arXiv.2409.13869 / Published by ArXiv / on (web) Publishing site
Data and Results


GenAI Advertising: Risks of Personalizing Ads with LLMs / 2409.15436 / ISBN:https://doi.org/10.48550/arXiv.2409.15436 / Published by ArXiv / on (web) Publishing site
7 Discussion
A Appendix


Artificial Human Intelligence: The role of Humans in the Development of Next Generation AI / 2409.16001 / ISBN:https://doi.org/10.48550/arXiv.2409.16001 / Published by ArXiv / on (web) Publishing site
4 Brain-inspired Information processing
5 Challenges and Perspectives in Human-Level AI Development


Ethical and Scalable Automation: A Governance and Compliance Framework for Business Applications / 2409.16872 / ISBN:https://doi.org/10.48550/arXiv.2409.16872 / Published by ArXiv / on (web) Publishing site
3. Methodology


Social Media Bot Policies: Evaluating Passive and Active Enforcement / 2409.18931 / ISBN:https://doi.org/10.48550/arXiv.2409.18931 / Published by ArXiv / on (web) Publishing site
I. Introduction


Safety challenges of AI in medicine / 2409.18968 / ISBN:https://doi.org/10.48550/arXiv.2409.18968 / Published by ArXiv / on (web) Publishing site
4 AI safety issues related to large language models in medicine


The Gradient of Health Data Privacy / 2410.00897 / ISBN:https://doi.org/10.48550/arXiv.2410.00897 / Published by ArXiv / on (web) Publishing site
Abstract
2 Background and Related Work
3 The Health Data Privacy Gradient
4 Technical Implementation of a Privacy Gradient Model
References


Ethical software requirements from user reviews: A systematic literature review / 2410.01833 / ISBN:https://doi.org/10.48550/arXiv.2410.01833 / Published by ArXiv / on (web) Publishing site
I. Introduction


AI-Press: A Multi-Agent News Generating and Feedback Simulation System Powered by Large Language Models / 2410.07561 / ISBN:https://doi.org/10.48550/arXiv.2410.07561 / Published by ArXiv / on (web) Publishing site
References


Investigating Labeler Bias in Face Annotation for Machine Learning / 2301.09902 / ISBN:https://doi.org/10.48550/arXiv.2301.09902 / Published by ArXiv / on (web) Publishing site
2. Related Work


The Design Space of in-IDE Human-AI Experience / 2410.08676 / ISBN:https://doi.org/10.48550/arXiv.2410.08676 / Published by ArXiv / on (web) Publishing site
IV. Results


Trust or Bust: Ensuring Trustworthiness in Autonomous Weapon Systems / 2410.10284 / ISBN:https://doi.org/10.48550/arXiv.2410.10284 / Published by ArXiv / on (web) Publishing site
III. Research Methodology
IV. Challenges of AWS


Learning Human-like Representations to Enable Learning Human Values / 2312.14106 / ISBN:https://doi.org/10.48550/arXiv.2312.14106 / Published by ArXiv / on (web) Publishing site
A. Appendix


Study on the Helpfulness of Explainable Artificial Intelligence / 2410.11896 / ISBN:https://doi.org/10.48550/arXiv.2410.11896 / Published by ArXiv / on (web) Publishing site
2 Measuring Explainability
References


Navigating the Cultural Kaleidoscope: A Hitchhiker's Guide to Sensitivity in Large Language Models / 2410.12880 / ISBN:https://doi.org/10.48550/arXiv.2410.12880 / Published by ArXiv / on (web) Publishing site
4 Cultural safety dataset


How Do AI Companies Fine-Tune Policy? Examining Regulatory Capture in AI Governance / 2410.13042 / ISBN:https://doi.org/10.48550/arXiv.2410.13042 / Published by ArXiv / on (web) Publishing site
Executive Summary
7 Limitations


Data Defenses Against Large Language Models / 2410.13138 / ISBN:https://doi.org/10.48550/arXiv.2410.13138 / Published by ArXiv / on (web) Publishing site
8 Ethics Considerations and Compliance with the Open Science Policy


Confrontation or Acceptance: Understanding Novice Visual Artists' Perception towards AI-assisted Art Creation / 2410.14925 / ISBN:https://doi.org/10.48550/arXiv.2410.14925 / Published by ArXiv / on (web) Publishing site
5 RQ1: Evolution of the Opinions Towards AI Tools


Jailbreaking and Mitigation of Vulnerabilities in Large Language Models / 2410.15236 / ISBN:https://doi.org/10.48550/arXiv.2410.15236 / Published by ArXiv / on (web) Publishing site
II. Background and Concepts
III. Jailbreak Attack Methods and Techniques
V. Evaluation and Benchmarking
VI. Research Gaps and Future Directions
VII. Conclusion


Vernacularizing Taxonomies of Harm is Essential for Operationalizing Holistic AI Safety / 2410.16562 / ISBN:https://doi.org/10.48550/arXiv.2410.16562 / Published by ArXiv / on (web) Publishing site
Introduction
References


Trustworthy XAI and Application / 2410.17139 / ISBN:https://doi.org/10.48550/arXiv.2410.17139 / Published by ArXiv / on (web) Publishing site
1 Introduction
3 Applications of Trustworthy XAI


Demystifying Large Language Models for Medicine: A Primer / 2410.18856 / ISBN:https://doi.org/10.48550/arXiv.2410.18856 / Published by ArXiv / on (web) Publishing site
Task Formulation
Prompt engineering


The Cat and Mouse Game: The Ongoing Arms Race Between Diffusion Models and Detection Methods / 2410.18866 / ISBN:https://doi.org/10.48550/arXiv.2410.18866 / Published by ArXiv / on (web) Publishing site
I. Introduction
II. Fundamentals of Diffusion Models and Detection Challenges
IV. Detection Methods Based on Textual and Multimodal Analysis for Text-to-Image Models
V. Datasets and Benchmarks
VIII. Research Gaps and Future Directions


My Replika Cheated on Me and She Liked It: A Taxonomy of Algorithmic Harms in Human-AI Relationships / 2410.20130 / ISBN:https://doi.org/10.48550/arXiv.2410.20130 / Published by ArXiv / on (web) Publishing site
3 Methodology
5 Discussion


The Trap of Presumed Equivalence: Artificial General Intelligence Should Not Be Assessed on the Scale of Human Intelligence / 2410.21296 / ISBN:https://doi.org/10.48550/arXiv.2410.21296 / Published by ArXiv / on (web) Publishing site
2 Related Work


Democratizing Reward Design for Personal and Representative Value-Alignment / 2410.22203 / ISBN:https://doi.org/10.48550/arXiv.2410.22203 / Published by ArXiv / on (web) Publishing site
1 Introduction
7 Discussion


Ethical Statistical Practice and Ethical AI / 2410.22475 / ISBN:https://doi.org/10.48550/arXiv.2410.22475 / Published by ArXiv / on (web) Publishing site
1. Introduction


Moral Agency in Silico: Exploring Free Will in Large Language Models / 2410.23310 / ISBN:https://doi.org/10.48550/arXiv.2410.23310 / Published by ArXiv / on (web) Publishing site
Introduction
Defining Key Concepts
Theoretical Framework
Discussion


Web Scraping for Research: Legal, Ethical, Institutional, and Scientific Considerations / 2410.23432 / ISBN:https://doi.org/10.48550/arXiv.2410.23432 / Published by ArXiv / on (web) Publishing site
4 Recommendations
References


The Transformative Impact of AI and Deep Learning in Business: A Literature Review / 2410.23443 / ISBN:https://doi.org/10.48550/arXiv.2410.23443 / Published by ArXiv / on (web) Publishing site
II. Background and Theoretical Foundations of AI and Deep Learning


Using Large Language Models for a standard assessment mapping for sustainable communities / 2411.00208 / ISBN:https://doi.org/10.48550/arXiv.2411.00208 / Published by ArXiv / on (web) Publishing site
2 Literature Review
5 Discussion


Where Assessment Validation and Responsible AI Meet / 2411.02577 / ISBN:https://doi.org/10.48550/arXiv.2411.02577 / Published by ArXiv / on (web) Publishing site
References


Examining Human-AI Collaboration for Co-Writing Constructive Comments Online / 2411.03295 / ISBN:https://doi.org/10.48550/arXiv.2411.03295 / Published by ArXiv / on (web) Publishing site
2 Related Work
3 Methods


Smoke Screens and Scapegoats: The Reality of General Data Protection Regulation Compliance -- Privacy and Ethics in the Case of Replika AI / 2411.04490 / ISBN:https://doi.org/10.48550/arXiv.2411.04490 / Published by ArXiv / on (web) Publishing site
Abstract
1. Introduction
3. Method


A Comprehensive Review of Multimodal XR Applications, Risks, and Ethical Challenges in the Metaverse / 2411.04508 / ISBN:https://doi.org/10.48550/arXiv.2411.04508 / Published by ArXiv / on (web) Publishing site
3. XR Applications: Expanding Multimodal Interactions Across Domains
4. Potential Risks and Ethical Challenges of XR and the Metaverse


"I Always Felt that Something Was Wrong.": Understanding Compliance Risks and Mitigation Strategies when Professionals Use Large Language Models / 2411.04576 / ISBN:https://doi.org/10.48550/arXiv.2411.04576 / Published by ArXiv / on (web) Publishing site
2 Background and Related Work
Appendices


A Survey on Medical Large Language Models: Technology, Application, Trustworthiness, and Future Directions / 2406.03712 / ISBN:https://doi.org/10.48550/arXiv.2406.03712 / Published by ArXiv / on (web) Publishing site
II. Background and Technology
III. From General to Medical-Specific LLMs
IV. Improving Algorithms for Med-LLMs
V. Applying Medical LLMs
References


Nteasee: A mixed methods study of expert and general population perspectives on deploying AI for health in African countries / 2409.12197 / ISBN:https://doi.org/10.48550/arXiv.2409.12197 / Published by ArXiv / on (web) Publishing site
Abstract
1 Introduction
2 Methods
3 Results
4 Discussion
References


Persuasion with Large Language Models: a Survey / 2411.06837 / ISBN:https://doi.org/10.48550/arXiv.2411.06837 / Published by ArXiv / on (web) Publishing site
3 Factors Influencing Persuasiveness


Collaborative Participatory Research with LLM Agents in South Asia: An Empirically-Grounded Methodological Initiative and Agenda from Field Evidence in Sri Lanka / 2411.08294 / ISBN:https://doi.org/10.48550/arXiv.2411.08294 / Published by ArXiv / on (web) Publishing site
3 Proposed LLM4Participatory Research Framework
5 Discussion and Future Agenda


Human-Centered AI Transformation: Exploring Behavioral Dynamics in Software Engineering / 2411.08693 / ISBN:https://doi.org/10.48550/arXiv.2411.08693 / Published by ArXiv / on (web) Publishing site
IV. Results


Programming with AI: Evaluating ChatGPT, Gemini, AlphaCode, and GitHub Copilot for Programmers / 2411.09224 / ISBN:https://doi.org/10.48550/arXiv.2411.09224 / Published by ArXiv / on (web) Publishing site
1 Introduction
2 Related Work
6 Content


Generative AI in Multimodal User Interfaces: Trends, Challenges, and Cross-Platform Adaptability / 2411.10234 / ISBN:https://doi.org/10.48550/arXiv.2411.10234 / Published by ArXiv / on (web) Publishing site
II. Problem Statement: the Interface Dilemma
III. History and Evolution of User Interfaces
IV. Current App Frameworks and AI Integration
V. Multimodal Interaction
VI. Limitations, Challenges, and Future Directions for AI-Driven Interfaces


Bias in Large Language Models: Origin, Evaluation, and Mitigation / 2411.10915 / ISBN:https://doi.org/10.48550/arXiv.2411.10915 / Published by ArXiv / on (web) Publishing site
References
1. Introduction
3. Extrinsic Bias
4. Bias Evaluation


Framework for developing and evaluating ethical collaboration between expert and machine / 2411.10983 / ISBN:https://doi.org/10.48550/arXiv.2411.10983 / Published by ArXiv / on (web) Publishing site
2. Method


GPT versus Humans: Uncovering Ethical Concerns in Conversational Generative AI-empowered Multi-Robot Systems / 2411.14009 / ISBN:https://doi.org/10.48550/arXiv.2411.14009 / Published by ArXiv / on (web) Publishing site
2 Background
4 Results
References


Privacy-Preserving Video Anomaly Detection: A Survey / 2411.14565 / ISBN:https://doi.org/10.48550/arXiv.2411.14565 / Published by ArXiv / on (web) Publishing site
III. P2VAD with non-Identifiable Elements
V. Edge-Cloud Intelligence Empowered P2VAD
VII. Discussion


Advancing Transformative Education: Generative AI as a Catalyst for Equity and Innovation / 2411.15971 / ISBN:https://doi.org/10.48550/arXiv.2411.15971 / Published by ArXiv / on (web) Publishing site
2 Theoretical Framework
3 Literature Review


AI-Augmented Ethical Hacking: A Practical Examination of Manual Exploitation and Privilege Escalation in Linux Environments / 2411.17539 / ISBN:https://doi.org/10.48550/arXiv.2411.17539 / Published by ArXiv / on (web) Publishing site
2 Generative AI and ChatGPT
6 Discussion: Benefits, Risks and Limitations


Examining Multimodal Gender and Content Bias in ChatGPT-4o / 2411.19140 / ISBN:https://doi.org/10.48550/arXiv.2411.19140 / Published by ArXiv / on (web) Publishing site
1. Introduction
3. Textual Generation Experiment
4. Visual Generation Experiment


Human-centred test and evaluation of military AI / 2412.01978 / ISBN:https://doi.org/10.48550/arXiv.2412.01978 / Published by ArXiv / on (web) Publishing site
Full Summary


Ethical Challenges and Evolving Strategies in the Integration of Artificial Intelligence into Clinical Practice / 2412.03576 / ISBN:https://doi.org/10.48550/arXiv.2412.03576 / Published by ArXiv / on (web) Publishing site
Emerging Ideas


Exploring AI Text Generation, Retrieval-Augmented Generation, and Detection Technologies: a Comprehensive Overview / 2412.03933 / ISBN:https://doi.org/10.48550/arXiv.2412.03933 / Published by ArXiv / on (web) Publishing site
Abstract
I. Introduction
III. Retrieval Augmented Generation (RAG)
IV. Tools and Methods for RAG
VIII. Future Work and Conclusion


Large Language Models in Politics and Democracy: A Comprehensive Survey / 2412.04498 / ISBN:https://doi.org/10.48550/arXiv.2412.04498 / Published by ArXiv / on (web) Publishing site
References


Technology as uncharted territory: Contextual integrity and the notion of AI as new ethical ground / 2412.05130 / ISBN:https://doi.org/10.48550/arXiv.2412.05130 / Published by ArXiv / on (web) Publishing site
Abstract
I Introduction
II AI Practice and Contextual Integrity
III AI Ethics and the notion of AI as uncharted moral territory
IV Integrative AI Ethics
V Conclusion
VI References


Can OpenAI o1 outperform humans in higher-order cognitive thinking? / 2412.05753 / ISBN:https://doi.org/10.48550/arXiv.2412.05753 / Published by ArXiv / on (web) Publishing site
2 Methods
3 Results


Political-LLM: Large Language Models in Political Science / 2412.06864 / ISBN:https://doi.org/10.48550/arXiv.2412.06864 / Published by ArXiv / on (web) Publishing site
1 Introduction
2 Preliminaries
3 Taxonomy on LLM for Political Science
4 Classical Political Science Functions and Modern Transformations
5 Technical Foundations for LLM Applications in Political Science
6 Future Directions & Challenges
References


Digital Democracy in the Age of Artificial Intelligence / 2412.07791 / ISBN:https://doi.org/10.48550/arXiv.2412.07791 / Published by ArXiv / on (web) Publishing site
4. Representation: Digital and AI Technologies in Modern Electoral Processes


Towards Foundation-model-based Multiagent System to Accelerate AI for Social Impact / 2412.07880 / ISBN:https://doi.org/10.48550/arXiv.2412.07880 / Published by ArXiv / on (web) Publishing site
5 Testing and Deployment


Bias in Large Language Models: Origin, Evaluation, and Mitigation / 2411.10915 / ISBN:https://doi.org/10.48550/arXiv.2411.10915 / Published by ArXiv / on (web) Publishing site
Appendices


CERN for AI: A Theoretical Framework for Autonomous Simulation-Based Artificial Intelligence Testing and Alignment / 2312.09402 / ISBN:https://doi.org/10.48550/arXiv.2312.09402 / Published by ArXiv / on (web) Publishing site
Establishing a framework for interactions in an autonomous digital city


Reviewing Intelligent Cinematography: AI research for camera-based video production / 2405.05039 / ISBN:https://doi.org/10.48550/arXiv.2405.05039 / Published by ArXiv / on (web) Publishing site
Appendices


AI Ethics in Smart Homes: Progress, User Requirements and Challenges / 2412.09813 / ISBN:https://doi.org/10.48550/arXiv.2412.09813 / Published by ArXiv / on (web) Publishing site
3 Smart Home Technologies and AI Ethics
4 AI Ethics from User Requirements' Perspective
5 AI Ethics from Technology's Perspective
References


Research Integrity and GenAI: A Systematic Analysis of Ethical Challenges Across Research Phases / 2412.10134 / ISBN:https://doi.org/10.48550/arXiv.2412.10134 / Published by ArXiv / on (web) Publishing site
Research Phases and AI Tools
Discussion


On Large Language Models in Mission-Critical IT Governance: Are We Ready Yet? / 2412.11698 / ISBN:https://doi.org/10.48550/arXiv.2412.11698 / Published by ArXiv / on (web) Publishing site
III. Results


Clio: Privacy-Preserving Insights into Real-World AI Use / 2412.13678 / ISBN:https://doi.org/10.48550/arXiv.2412.13678 / Published by ArXiv / on (web) Publishing site
2 High-level design of Clio


User-Generated Content and Editors in Games: A Comprehensive Survey / 2412.13743 / ISBN:https://doi.org/10.48550/arXiv.2412.13743 / Published by ArXiv / on (web) Publishing site
References


Towards Friendly AI: A Comprehensive Review and New Perspectives on Human-AI Alignment / 2412.15114 / ISBN:https://doi.org/10.48550/arXiv.2412.15114 / Published by ArXiv / on (web) Publishing site
V. Challenges and Suggestions


Navigating AI to Unpack Youth Privacy Concerns: An In-Depth Exploration and Systematic Review / 2412.16369 / ISBN:https://doi.org/10.48550/arXiv.2412.16369 / Published by ArXiv / on (web) Publishing site
III. Results
IV. Discussion


Ethics and Technical Aspects of Generative AI Models in Digital Content Creation / 2412.16389 / ISBN:https://doi.org/10.48550/arXiv.2412.16389 / Published by ArXiv / on (web) Publishing site
2 Literature Review
4 Results
5 Discussion
Appendices


Large Language Model Safety: A Holistic Survey / 2412.17686 / ISBN:https://doi.org/10.48550/arXiv.2412.17686 / Published by ArXiv / on (web) Publishing site
7 Agent Safety
8 Interpretability for LLM Safety


Self-Disclosure to AI: The Paradox of Trust and Vulnerability in Human-Machine Interactions / 2412.20564 / ISBN:https://doi.org/10.48550/arXiv.2412.20564 / Published by ArXiv / on (web) Publishing site
4 Technological Philosophy and Ethics
References


Generative AI and LLMs in Industry: A text-mining Analysis and Critical Evaluation of Guidelines and Policy Statements Across Fourteen Industrial Sectors / 2501.00957 / ISBN:https://doi.org/10.48550/arXiv.2501.00957 / Published by ArXiv / on (web) Publishing site
Introduction
II. Methodology
III. Qualitative Findings and Resultant Themes
IV. Quantitative Findings and Resultant Patterns
V. Discussion and Synthesis


Curious, Critical Thinker, Empathetic, and Ethically Responsible: Essential Soft Skills for Data Scientists in Software Engineering / 2501.02088 / ISBN:https://doi.org/10.48550/arXiv.2501.02088 / Published by ArXiv / on (web) Publishing site
III. Method
V. Discussions


Concerns and Values in Human-Robot Interactions: A Focus on Social Robotics / 2501.05628 / ISBN:https://doi.org/10.48550/arXiv.2501.05628 / Published by ArXiv / on (web) Publishing site
4 Phase 2: Focus Groups
5 Phase 3: Design and Evaluation of the HRI-Value Compass
6 General Discussion and Conclusion
References
Appendices