RobertoLofaro.com - Knowledge Portal - human-generated content
Change, with and without technology
for updates on publications, follow @robertolofaro on Instagram or @changerulebook on Twitter, you can also support on Patreon or subscribe on YouTube



You are now here: AI Ethics Primer - search within the bibliography - version 0.4 of 2023-12-13 > (tag cloud) > tag selected: subtle



Tag: subtle

Bibliography items where the tag occurs: 159
On the Current and Emerging Challenges of Developing Fair and Ethical AI Solutions in Financial Services / 2111.01306 / ISBN:https://doi.org/10.48550/arXiv.2111.01306 / Published by ArXiv / on (web) Publishing site
3 Practical Challenges of Ethical AI


From Military to Healthcare: Adopting and Expanding Ethical Principles for Generative Artificial Intelligence / 2308.02448 / ISBN:https://doi.org/10.48550/arXiv.2308.02448 / Published by ArXiv / on (web) Publishing site
Applications in Military Versus Healthcare


The AI Revolution: Opportunities and Challenges for the Finance Sector / 2308.16538 / ISBN:https://doi.org/10.48550/arXiv.2308.16538 / Published by ArXiv / on (web) Publishing site
5 Challenges


Ethical Framework for Harnessing the Power of AI in Healthcare and Beyond / 2309.00064 / ISBN:https://doi.org/10.48550/arXiv.2309.00064 / Published by ArXiv / on (web) Publishing site
4 Human-centric AI


The Impact of Artificial Intelligence on the Evolution of Digital Education: A Comparative Study of OpenAI Text Generation Tools including ChatGPT, Bing Chat, Bard, and Ernie / 2309.02029 / ISBN:https://doi.org/10.48550/arXiv.2309.02029 / Published by ArXiv / on (web) Publishing site
4. Methods


Pathway to Future Symbiotic Creativity / 2209.02388 / ISBN:https://doi.org/10.48550/arXiv.2209.02388 / Published by ArXiv / on (web) Publishing site
Part 1 - 1 Generative Systems: Mimicking Artifacts
Part 2 - 2 Motion Capture Technologies and Motion Data
Part 5 Ethical AI and Machine Artist
Part 5 - 1 Authorship and Ownership of AI-generated Works of Art


FUTURE-AI: Guiding Principles and Consensus Recommendations for Trustworthy Artificial Intelligence in Medical Imaging / 2109.09658 / ISBN:https://doi.org/10.48550/arXiv.2109.09658 / Published by ArXiv / on (web) Publishing site
6. Robustness - For Reliable AI in Medical Imaging
7. Explainability - For Enhanced Understanding of AI in Medical Imaging


Security Considerations in AI-Robotics: A Survey of Current Methods, Challenges, and Opportunities / 2310.08565 / ISBN:https://doi.org/10.48550/arXiv.2310.08565 / Published by ArXiv / on (web) Publishing site
IV. Attack Surfaces


Commercialized Generative AI: A Critical Study of the Feasibility and Ethics of Generating Native Advertising Using Large Language Models in Conversational Web Search / 2310.04892 / ISBN:https://doi.org/10.48550/arXiv.2310.04892 / Published by ArXiv / on (web) Publishing site
Abstract
Introduction
Pilot Study: Text SERPs with Ads
Evaluation of the Pilot Study
Ethics of Generating Native Ads


Compromise in Multilateral Negotiations and the Global Regulation of Artificial Intelligence / 2309.17158 / ISBN:https://doi.org/10.48550/arXiv.2309.17158 / Published by ArXiv / on (web) Publishing site
3. The liberal-sovereigntist multiplicity


In Consideration of Indigenous Data Sovereignty: Data Mining as a Colonial Practice / 2309.10215 / ISBN:https://doi.org/10.48550/arXiv.2309.10215 / Published by ArXiv / on (web) Publishing site
2 Definitions of Terms


FUTURE-AI: International consensus guideline for trustworthy and deployable artificial intelligence in healthcare / 2309.12325 / ISBN:https://doi.org/10.48550/arXiv.2309.12325 / Published by ArXiv / on (web) Publishing site
COMPETING INTERESTS


Language Agents for Detecting Implicit Stereotypes in Text-to-Image Models at Scale / 2310.11778 / ISBN:https://doi.org/10.48550/arXiv.2310.11778 / Published by ArXiv / on (web) Publishing site
5 Related Work


Specific versus General Principles for Constitutional AI / 2310.13798 / ISBN:https://doi.org/10.48550/arXiv.2310.13798 / Published by ArXiv / on (web) Publishing site
Abstract
2 AI feedback on specific problematic AI traits
3 Generalization from a Simple Good for Humanity Principle
6 Discussion


The Self 2.0: How AI-Enhanced Self-Clones Transform Self-Perception and Improve Presentation Skills / 2310.15112 / ISBN:https://doi.org/10.48550/arXiv.2310.15112 / Published by ArXiv / on (web) Publishing site
5 Discussion
Annexed tables


Systematic AI Approach for AGI: Addressing Alignment, Energy, and AGI Grand Challenges / 2310.15274 / ISBN:https://doi.org/10.48550/arXiv.2310.15274 / Published by ArXiv / on (web) Publishing site
5 System Design for AI Alignment
7 Pursuit of the “Universal AGI Architecture” versus “Systematic AGI”


Unpacking the Ethical Value Alignment in Big Models / 2310.17551 / ISBN:https://doi.org/10.48550/arXiv.2310.17551 / Published by ArXiv / on (web) Publishing site
3 Investigating the Ethical Values of Large Language Models


Moral Responsibility for AI Systems / 2310.18040 / ISBN:https://doi.org/10.48550/arXiv.2310.18040 / Published by ArXiv / on (web) Publishing site
4 The Causal Condition


Fairness And Bias in Artificial Intelligence: A Brief Survey of Sources, Impacts, And Mitigation Strategies / 2304.07683 / ISBN:https://doi.org/10.48550/arXiv.2304.07683 / Published by ArXiv / on (web) Publishing site
VII. Conclusions


She had Cobalt Blue Eyes: Prompt Testing to Create Aligned and Sustainable Language Models / 2310.18333 / ISBN:https://doi.org/10.48550/arXiv.2310.18333 / Published by ArXiv / on (web) Publishing site
3 ReFLeCT: Robust, Fair, and Safe LLM Construction Test Suite


Responsible AI Research Needs Impact Statements Too / 2311.11776 / ISBN:https://doi.org/10.48550/arXiv.2311.11776 / Published by ArXiv / on (web) Publishing site
Requiring adverse impact statements for RAI research is long overdue


Potential Societal Biases of ChatGPT in Higher Education: A Scoping Review / 2311.14381 / ISBN:https://doi.org/10.48550/arXiv.2311.14381 / Published by ArXiv / on (web) Publishing site
Analytical Framework


Deepfakes, Misinformation, and Disinformation in the Era of Frontier AI, Generative AI, and Large AI Models / 2311.17394 / ISBN:https://doi.org/10.48550/arXiv.2311.17394 / Published by ArXiv / on (web) Publishing site
V. Technical defense mechanisms


Culturally Responsive Artificial Intelligence -- Problems, Challenges and Solutions / 2312.08467 / ISBN:https://doi.org/10.48550/arXiv.2312.08467 / Published by ArXiv / on (web) Publishing site
Culturally responsive AI – current landscape


Autonomous Threat Hunting: A Future Paradigm for AI-Driven Threat Intelligence / 2401.00286 / ISBN:https://doi.org/10.48550/arXiv.2401.00286 / Published by ArXiv / on (web) Publishing site
1. Introduction
3. Autonomous threat hunting: conceptual framework
8. Future directions and emerging trends


AI Ethics Principles in Practice: Perspectives of Designers and Developers / 2112.07467 / ISBN:https://doi.org/10.48550/arXiv.2112.07467 / Published by ArXiv / on (web) Publishing site
IV. Results


Unmasking Bias in AI: A Systematic Review of Bias Detection and Mitigation Strategies in Electronic Health Record-based Models / 2310.19917 / ISBN:https://doi.org/10.48550/arXiv.2310.19917 / Published by ArXiv / on (web) Publishing site
Discussion


Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making / 2401.08691 / ISBN:https://doi.org/10.48550/arXiv.2401.08691 / Published by ArXiv / on (web) Publishing site
Contents / List of figures / List of tables / Acronyms
4 Fairness metrics landscape in machine learning
II Mitigating bias - 5 Fairness mitigation


Detecting Multimedia Generated by Large AI Models: A Survey / 2402.00045 / ISBN:https://doi.org/10.48550/arXiv.2402.00045 / Published by ArXiv / on (web) Publishing site
2 Generation
3 Detection
5 Discussion


Generative Artificial Intelligence in Higher Education: Evidence from an Analysis of Institutional Policies and Guidelines / 2402.01659 / ISBN:https://doi.org/10.48550/arXiv.2402.01659 / Published by ArXiv / on (web) Publishing site
2. Related literature


(A)I Am Not a Lawyer, But...: Engaging Legal Experts towards Responsible LLM Policies for Legal Advice / 2402.01864 / ISBN:https://doi.org/10.48550/arXiv.2402.01864 / Published by ArXiv / on (web) Publishing site
5 Discussion


POLARIS: A framework to guide the development of Trustworthy AI systems / 2402.05340 / ISBN:https://doi.org/10.48550/arXiv.2402.05340 / Published by ArXiv / on (web) Publishing site
2 Background


Inadequacies of Large Language Model Benchmarks in the Era of Generative Artificial Intelligence / 2402.09880 / ISBN:https://doi.org/10.48550/arXiv.2402.09880 / Published by ArXiv / on (web) Publishing site
I. Introduction
II. Background and Related Work
V. Processual Elements
VI. Human Dynamics
VII. Discussions
VIII. Conclusion


A Survey on Human-AI Teaming with Large Pre-Trained Models / 2403.04931 / ISBN:https://doi.org/10.48550/arXiv.2403.04931 / Published by ArXiv / on (web) Publishing site
3 Effective Human-AI Joint Systems
5 Applications


AI Ethics: A Bibliometric Analysis, Critical Issues, and Key Gaps / 2403.14681 / ISBN:https://doi.org/10.48550/arXiv.2403.14681 / Published by ArXiv / on (web) Publishing site
Key AI Ethics Issues


Safeguarding Marketing Research: The Generation, Identification, and Mitigation of AI-Fabricated Disinformation / 2403.14706 / ISBN:https://doi.org/10.48550/arXiv.2403.14706 / Published by ArXiv / on (web) Publishing site
Results


The Pursuit of Fairness in Artificial Intelligence Models A Survey / 2403.17333 / ISBN:https://doi.org/10.48550/arXiv.2403.17333 / Published by ArXiv / on (web) Publishing site
7 Challenges and Limitations


Power and Play Investigating License to Critique in Teams AI Ethics Discussions / 2403.19049 / ISBN:https://doi.org/10.48550/arXiv.2403.19049 / Published by ArXiv / on (web) Publishing site
5 Discussion


Implications of the AI Act for Non-Discrimination Law and Algorithmic Fairness / 2403.20089 / ISBN:https://doi.org/10.48550/arXiv.2403.20089 / Published by ArXiv / on (web) Publishing site
3 Implications of the AI Act


A Critical Survey on Fairness Benefits of Explainable AI / 2310.13007 / ISBN:https://doi.org/10.1145/3630106.3658990 / Published by ArXiv / on (web) Publishing site
4 Critical Survey


AI Alignment: A Comprehensive Survey / 2310.19852 / ISBN:https://doi.org/10.48550/arXiv.2310.19852 / Published by ArXiv / on (web) Publishing site
2 Learning from Feedback


Generative Ghosts: Anticipating Benefits and Risks of AI Afterlives / 2402.01662 / ISBN:https://doi.org/10.48550/arXiv.2402.01662 / Published by ArXiv / on (web) Publishing site
3 Generative Ghosts: A Design Space


On the role of ethics and sustainability in business innovation / 2404.07678 / ISBN:https://doi.org/10.48550/arXiv.2404.07678 / Published by ArXiv / on (web) Publishing site
Sustainability considerations


PoliTune: Analyzing the Impact of Data Selection and Fine-Tuning on Economic and Political Biases in Large Language Models / 2404.08699 / ISBN:https://doi.org/10.48550/arXiv.2404.08699 / Published by ArXiv / on (web) Publishing site
2 Background and Related Work


Detecting AI Generated Text Based on NLP and Machine Learning Approaches / 2404.10032 / ISBN:https://doi.org/10.48550/arXiv.2404.10032 / Published by ArXiv / on (web) Publishing site
IV. Results and Discussion


Debunking Robot Rights Metaphysically, Ethically, and Legally / 2404.10072 / ISBN:https://doi.org/10.48550/arXiv.2404.10072 / Published by ArXiv / on (web) Publishing site
6 Posthumanism


Just Like Me: The Role of Opinions and Personal Experiences in The Perception of Explanations in Subjective Decision-Making / 2404.12558 / ISBN:https://doi.org/10.48550/arXiv.2404.12558 / Published by ArXiv / on (web) Publishing site
Abstract
1 Introduction
2 Method
4 Discussion and Implications
5 Limitations
6 Conclusion & Future Work


Large Language Model Supply Chain: A Research Agenda / 2404.12736 / ISBN:https://doi.org/10.48550/arXiv.2404.12736 / Published by ArXiv / on (web) Publishing site
3 LLM Infrastructure
4 LLM Lifecycle


Modeling Emotions and Ethics with Large Language Models / 2404.13071 / ISBN:https://doi.org/10.48550/arXiv.2404.13071 / Published by ArXiv / on (web) Publishing site
2 Qualifying and Quantifying Emotions


A Survey on Large Language Models for Critical Societal Domains: Finance, Healthcare, and Law / 2405.01769 / ISBN:https://doi.org/10.48550/arXiv.2405.01769 / Published by ArXiv / on (web) Publishing site
6 Ethics


Exploring the Potential of the Large Language Models (LLMs) in Identifying Misleading News Headlines / 2405.03153 / ISBN:https://doi.org/10.48550/arXiv.2405.03153 / Published by ArXiv / on (web) Publishing site
Abstract
2 Related Work


Guiding the Way: A Comprehensive Examination of AI Guidelines in Global Media / 2405.04706 / ISBN:https://doi.org/10.48550/arXiv.2405.04706 / Published by ArXiv / on (web) Publishing site
5 Discussion and conclusions


Redefining Qualitative Analysis in the AI Era: Utilizing ChatGPT for Efficient Thematic Analysis / 2309.10771 / ISBN:https://doi.org/10.48550/arXiv.2309.10771 / on (web) Publishing site
5 Analyses of the Design Process


Should agentic conversational AI change how we think about ethics? Characterising an interactional ethics centred on respect / 2401.09082 / ISBN:https://doi.org/10.48550/arXiv.2401.09082 / Published by ArXiv / on (web) Publishing site
Social-interactional harms


Unsocial Intelligence: an Investigation of the Assumptions of AGI Discourse / 2401.13142 / ISBN:https://doi.org/10.48550/arXiv.2401.13142 / Published by ArXiv / on (web) Publishing site
3 The Motley Choices of AGI Discourse
4 Towards Contextualized, Politically Legitimate, and Social Intelligence


The Wolf Within: Covert Injection of Malice into MLLM Societies via an MLLM Operative / 2402.14859 / ISBN:https://doi.org/10.48550/arXiv.2402.14859 / Published by ArXiv / on (web) Publishing site
1. Introduction
5. Conclusion


A scoping review of using Large Language Models (LLMs) to investigate Electronic Health Records (EHRs) / 2405.03066 / ISBN:https://doi.org/10.48550/arXiv.2405.03066 / Published by ArXiv / on (web) Publishing site
1 Introduction


Integrating Emotional and Linguistic Models for Ethical Compliance in Large Language Models / 2405.07076 / ISBN:https://doi.org/10.48550/arXiv.2405.07076 / Published by ArXiv / on (web) Publishing site
3 Quantitative Models of Emotions, Behaviors, and Ethics


Using ChatGPT for Thematic Analysis / 2405.08828 / ISBN:https://doi.org/10.48550/arXiv.2405.08828 / Published by ArXiv / on (web) Publishing site
4 Validation Using Topic Modeling


A Comprehensive Overview of Large Language Models (LLMs) for Cyber Defences: Opportunities and Directions / 2405.14487 / ISBN:https://doi.org/10.48550/arXiv.2405.14487 / Published by ArXiv / on (web) Publishing site
IV. Network Security


How Ethical Should AI Be? How AI Alignment Shapes the Risk Preferences of LLMs / 2406.01168 / ISBN:https://doi.org/10.48550/arXiv.2406.01168 / Published by ArXiv / on (web) Publishing site
III. Impact of Alignment on LLMs’ Risk Preferences


Deception Analysis with Artificial Intelligence: An Interdisciplinary Perspective / 2406.05724 / ISBN:https://doi.org/10.48550/arXiv.2406.05724 / Published by ArXiv / on (web) Publishing site
5 Conclusion


Some things never change: how far generative AI can really change software engineering practice / 2406.09725 / ISBN:https://doi.org/10.48550/arXiv.2406.09725 / Published by ArXiv / on (web) Publishing site
4 Results


Generative AI for Health Technology Assessment: Opportunities, Challenges, and Policy Considerations / 2407.11054 / ISBN:https://doi.org/10.48550/arXiv.2407.11054 / Published by ArXiv / on (web) Publishing site
A brief history of AI and generative AI


Mapping the individual, social, and biospheric impacts of Foundation Models / 2407.17129 / ISBN:https://doi.org/10.48550/arXiv.2407.17129 / Published by ArXiv / on (web) Publishing site
5 Discussion: Grappling with the Scale and Interconnectedness of Foundation Models


Deepfake Media Forensics: State of the Art and Challenges Ahead / 2408.00388 / ISBN:https://doi.org/10.48550/arXiv.2408.00388 / Published by ArXiv / on (web) Publishing site
Abstract
2. Deepfake Detection


Surveys Considered Harmful? Reflecting on the Use of Surveys in AI Research, Development, and Governance / 2408.01458 / ISBN:https://doi.org/10.48550/arXiv.2408.01458 / Published by ArXiv / on (web) Publishing site
1 Introduction


Between Copyright and Computer Science: The Law and Ethics of Generative AI / 2403.14653 / ISBN:https://doi.org/10.48550/arXiv.2403.14653 / Published by ArXiv / on (web) Publishing site
III. A Guide for Data in LLM Research


The Responsible Foundation Model Development Cheatsheet: A Review of Tools & Resources / 2406.16746 / ISBN:https://doi.org/10.48550/arXiv.2406.16746 / Published by ArXiv / on (web) Publishing site
8 Model Evaluation


Recent Advances in Generative AI and Large Language Models: Current Status, Challenges, and Perspectives / 2407.14962 / ISBN:https://doi.org/10.48550/arXiv.2407.14962 / Published by ArXiv / on (web) Publishing site
IV. Challenges of Generative AI and LLMs


Don't Kill the Baby: The Case for AI in Arbitration / 2408.11608 / ISBN:https://doi.org/10.48550/arXiv.2408.11608 / Published by ArXiv / on (web) Publishing site
I. AI and the Federal Arbitration Act


CIPHER: Cybersecurity Intelligent Penetration-testing Helper for Ethical Researcher / 2408.11650 / ISBN:https://doi.org/10.48550/arXiv.2408.11650 / Published by ArXiv / on (web) Publishing site
2. Background and Related Works


Has Multimodal Learning Delivered Universal Intelligence in Healthcare? A Comprehensive Survey / 2408.12880 / ISBN:https://doi.org/10.48550/arXiv.2408.12880 / Published by ArXiv / on (web) Publishing site
6 Discussions of Current Studies


Digital Homunculi: Reimagining Democracy Research with Generative Agents / 2409.00826 / ISBN:https://doi.org/10.48550/arXiv.2409.00826 / Published by ArXiv / on (web) Publishing site
4. Risks and Caveats


DetoxBench: Benchmarking Large Language Models for Multitask Fraud & Abuse Detection / 2409.06072 / ISBN:https://doi.org/10.48550/arXiv.2409.06072 / Published by ArXiv / on (web) Publishing site
1 Introduction


Recent Advances in Hate Speech Moderation: Multimodality and the Role of Large Models / 2401.16727 / ISBN:https://doi.org/10.48550/arXiv.2401.16727 / Published by ArXiv / on (web) Publishing site
Abstract
1 Introduction
2 Hate Speech
4 Challenges
5 Future Directions


Synthetic Human Memories: AI-Edited Images and Videos Can Implant False Memories and Distort Recollection / 2409.08895 / ISBN:https://doi.org/10.48550/arXiv.2409.08895 / Published by ArXiv / on (web) Publishing site
2 Methodology
5 Discussion


GenAI Advertising: Risks of Personalizing Ads with LLMs / 2409.15436 / ISBN:https://doi.org/10.48550/arXiv.2409.15436 / Published by ArXiv / on (web) Publishing site
5 User Study Methodology
6 User Study Results
7 Discussion
A Appendix


Artificial Human Intelligence: The role of Humans in the Development of Next Generation AI / 2409.16001 / ISBN:https://doi.org/10.48550/arXiv.2409.16001 / Published by ArXiv / on (web) Publishing site
IV. Human-Level AI and Challenges/Perspectives


Decoding Large-Language Models: A Systematic Overview of Socio-Technical Impacts, Constraints, and Emerging Questions / 2409.16974 / ISBN:https://doi.org/10.48550/arXiv.2409.16974 / Published by ArXiv / on (web) Publishing site
7 Limitations & Considerations (RQ3)


Clinnova Federated Learning Proof of Concept: Key Takeaways from a Cross-border Collaboration / 2410.02443 / ISBN:https://doi.org/10.48550/arXiv.2410.02443 / Published by ArXiv / on (web) Publishing site
VIII. Discussion and Conclusions


The Design Space of in-IDE Human-AI Experience / 2410.08676 / ISBN:https://doi.org/10.48550/arXiv.2410.08676 / Published by ArXiv / on (web) Publishing site
V. Discussion


How Do AI Companies Fine-Tune Policy? Examining Regulatory Capture in AI Governance / 2410.13042 / ISBN:https://doi.org/10.48550/arXiv.2410.13042 / Published by ArXiv / on (web) Publishing site
5 Mechanisms of Industry Influence in US AI Policy
8 Conclusion


Data Defenses Against Large Language Models / 2410.13138 / ISBN:https://doi.org/10.48550/arXiv.2410.13138 / Published by ArXiv / on (web) Publishing site
7 Conclusion and Limitations


Jailbreaking and Mitigation of Vulnerabilities in Large Language Models / 2410.15236 / ISBN:https://doi.org/10.48550/arXiv.2410.15236 / Published by ArXiv / on (web) Publishing site
IV. Defense Mechanisms Against Jailbreak Attacks


The Cat and Mouse Game: The Ongoing Arms Race Between Diffusion Models and Detection Methods / 2410.18866 / ISBN:https://doi.org/10.48550/arXiv.2410.18866 / Published by ArXiv / on (web) Publishing site
I. Introduction
II. Fundamentals of Diffusion Models and Detection Challenges
III. Detection Methods Based on Image Analysis
IV. Detection Methods Based on Textual and Multimodal Analysis for Text-to-Image Models
VI. Evaluation Metrics


The Dark Side of AI Companionship: A Taxonomy of Harmful Algorithmic Behaviors in Human-AI Relationships / 2410.20130 / ISBN:https://doi.org/10.48550/arXiv.2410.20130 / Published by ArXiv / on (web) Publishing site
4 Results
5 Discussion


Using Large Language Models for a standard assessment mapping for sustainable communities / 2411.00208 / ISBN:https://doi.org/10.48550/arXiv.2411.00208 / Published by ArXiv / on (web) Publishing site
5 Discussion


Examining Human-AI Collaboration for Co-Writing Constructive Comments Online / 2411.03295 / ISBN:https://doi.org/10.48550/arXiv.2411.03295 / Published by ArXiv / on (web) Publishing site
5 Discussion


A Comprehensive Review of Multimodal XR Applications, Risks, and Ethical Challenges in the Metaverse / 2411.04508 / ISBN:https://doi.org/10.48550/arXiv.2411.04508 / Published by ArXiv / on (web) Publishing site
2. Multimodal Interaction Across the Virtual Continuum
3. XR Applications: Expanding Multimodal Interactions Across Domains


A Survey on Medical Large Language Models: Technology, Application, Trustworthiness, and Future Directions / 2406.03712 / ISBN:https://doi.org/10.48550/arXiv.2406.03712 / Published by ArXiv / on (web) Publishing site
II. Background and Technology


Collaborative Participatory Research with LLM Agents in South Asia: An Empirically-Grounded Methodological Initiative and Agenda from Field Evidence in Sri Lanka / 2411.08294 / ISBN:https://doi.org/10.48550/arXiv.2411.08294 / Published by ArXiv / on (web) Publishing site
4 Field Work and Implementation Insights


Programming with AI: Evaluating ChatGPT, Gemini, AlphaCode, and GitHub Copilot for Programmers / 2411.09224 / ISBN:https://doi.org/10.48550/arXiv.2411.09224 / Published by ArXiv / on (web) Publishing site
3 Transformer Architecture


Bias in Large Language Models: Origin, Evaluation, and Mitigation / 2411.10915 / ISBN:https://doi.org/10.48550/arXiv.2411.10915 / Published by ArXiv / on (web) Publishing site
1. Introduction
4. Bias Evaluation
7. Conclusion


Political-LLM: Large Language Models in Political Science / 2412.06864 / ISBN:https://doi.org/10.48550/arXiv.2412.06864 / Published by ArXiv / on (web) Publishing site
4 Classical Political Science Functions and Modern Transformations
5 Technical Foundations for LLM Applications in Political Science
6 Future Directions & Challenges


Responsible AI in the Software Industry: A Practitioner-Centered Perspective / 2412.07620 / ISBN:https://doi.org/10.48550/arXiv.2412.07620 / Published by ArXiv / on (web) Publishing site
III. Findings


CERN for AI: A Theoretical Framework for Autonomous Simulation-Based Artificial Intelligence Testing and Alignment / 2312.09402 / ISBN:https://doi.org/10.48550/arXiv.2312.09402 / Published by ArXiv / on (web) Publishing site
Discussion


Intelligent Electric Power Steering: Artificial Intelligence Integration Enhances Vehicle Safety and Performance / 2412.08133 / ISBN:https://doi.org/10.48550/arXiv.2412.08133 / Published by ArXiv / on (web) Publishing site
IV. Effects on Vehicle Safety Through AI in EPS


Bots against Bias: Critical Next Steps for Human-Robot Interaction / 2412.12542 / ISBN:https://doi.org/10.1017/9781009386708.023 / Published by ArXiv / on (web) Publishing site
1 Introduction
3 Track: Against Bias in Robots


Understanding and Evaluating Trust in Generative AI and Large Language Models for Spreadsheets / 2412.14062 / ISBN:https://doi.org/10.48550/arXiv.2412.14062 / Published by ArXiv / on (web) Publishing site
2.0 Trust in Automation


Autonomous Vehicle Security: A Deep Dive into Threat Modeling / 2412.15348 / ISBN:https://doi.org/10.48550/arXiv.2412.15348 / Published by ArXiv / on (web) Publishing site
III. Autonomous Vehicle Cybersecurity Attacks


Large Language Model Safety: A Holistic Survey / 2412.17686 / ISBN:https://doi.org/10.48550/arXiv.2412.17686 / Published by ArXiv / on (web) Publishing site
5 Misuse
6 Autonomous AI Risks


Self-Disclosure to AI: The Paradox of Trust and Vulnerability in Human-Machine Interactions / 2412.20564 / ISBN:https://doi.org/10.48550/arXiv.2412.20564 / Published by ArXiv / on (web) Publishing site
2 Trust in Human-Machine Interaction


Implications of Artificial Intelligence on Health Data Privacy and Confidentiality / 2501.01639 / ISBN:https://doi.org/10.48550/arXiv.2501.01639 / Published by ArXiv / on (web) Publishing site
AI and Machine Learning in Healthcare


Trust and Dependability in Blockchain & AI Based MedIoT Applications: Research Challenges and Future Directions / 2501.02647 / ISBN:https://doi.org/10.48550/arXiv.2501.02647 / Published by ArXiv / on (web) Publishing site
In The Field: Perspectives from Patients and Health Practitioners


Uncovering Bias in Foundation Models: Impact, Testing, Harm, and Mitigation / 2501.10453 / ISBN:https://doi.org/10.48550/arXiv.2501.10453 / Published by ArXiv / on (web) Publishing site
2 Results and Discussion
4 Method


Harnessing the Potential of Large Language Models in Modern Marketing Management: Applications, Future Directions, and Strategic Recommendations / 2501.10685 / ISBN:https://doi.org/10.48550/arXiv.2501.10685 / Published by ArXiv / on (web) Publishing site
2- Technological Foundations
5- Customer Communication and Assistance
7- Social Media and Community Engagement


A Critical Field Guide for Working with Machine Learning Datasets / 2501.15491 / ISBN:https://doi.org/10.48550/arXiv.2501.15491 / Published by ArXiv / on (web) Publishing site
1. Introduction to Machine Learning Datasets
6. The Dataset Lifecycle


Examining the Expanding Role of Synthetic Data Throughout the AI Development Pipeline / 2501.18493 / ISBN:https://doi.org/10.48550/arXiv.2501.18493 / Published by ArXiv / on (web) Publishing site
4 Findings
5 Discussion


Towards Safe AI Clinicians: A Comprehensive Study on Large Language Model Jailbreaking in Healthcare / 2501.18632 / ISBN:https://doi.org/10.48550/arXiv.2501.18632 / Published by ArXiv / on (web) Publishing site
Model Guardrail Enhancement
Limitations and Future Work


Constructing AI ethics narratives based on real-world data: Human-AI collaboration in data-driven visual storytelling / 2502.00637 / ISBN:https://doi.org/10.48550/arXiv.2502.00637 / Published by ArXiv / on (web) Publishing site
2 Related Work


FairT2I: Mitigating Social Bias in Text-to-Image Generation via Large Language Model-Assisted Detection and Attribute Rebalancing / 2502.03826 / ISBN:https://doi.org/10.48550/arXiv.2502.03826 / Published by ArXiv / on (web) Publishing site
Abstract


Safety at Scale: A Comprehensive Survey of Large Model Safety / 2502.05206 / ISBN:https://doi.org/10.48550/arXiv.2502.05206 / Published by ArXiv / on (web) Publishing site
6 Diffusion Model Safety
8 Open Challenges


Position: We Need An Adaptive Interpretation of Helpful, Honest, and Harmless Principles / 2502.06059 / ISBN:https://doi.org/10.48550/arXiv.2502.06059 / Published by ArXiv / on (web) Publishing site
2 HHH Principle


Integrating Generative Artificial Intelligence in ADRD: A Framework for Streamlining Diagnosis and Care in Neurodegenerative Diseases / 2502.06842 / ISBN:https://doi.org/10.48550/arXiv.2502.06842 / Published by ArXiv / on (web) Publishing site
High Quality Data Collection


Relational Norms for Human-AI Cooperation / 2502.12102 / ISBN:https://doi.org/10.48550/arXiv.2502.12102 / Published by ArXiv / on (web) Publishing site
Section 2: Distinctive Characteristics of AI and Implications for Relational Norms


Multi-Agent Risks from Advanced AI / 2502.14143 / ISBN:https://doi.org/10.48550/arXiv.2502.14143 / Published by ArXiv / on (web) Publishing site
2 Failure Modes
3 Risk Factors


On the Trustworthiness of Generative Foundation Models: Guideline, Assessment, and Perspective / 2502.14296 / ISBN:https://doi.org/10.48550/arXiv.2502.14296 / Published by ArXiv / on (web) Publishing site
5 Benchmarking Text-to-Image Models
10 Further Discussion


Surgical Scene Understanding in the Era of Foundation AI Models: A Comprehensive Review / 2502.14886 / ISBN:https://doi.org/10.48550/arXiv.2502.14886 / Published by ArXiv / on (web) Publishing site
II. Background and Challenges
IV. ML/DL Applications in Surgical Workflow Analysis
V. ML/DL Applications in Surgical Training and Simulation


Evaluating Large Language Models on the Spanish Medical Intern Resident (MIR) Examination 2024/2025: A Comparative Analysis of Clinical Reasoning and Knowledge Application / 2503.00025 / ISBN:https://doi.org/10.48550/arXiv.2503.00025 / Published by ArXiv / on (web) Publishing site
4. Analysis and Results


Transforming Cyber Defense: Harnessing Agentic and Frontier AI for Proactive, Ethical Threat Intelligence / 2503.00164 / ISBN:https://doi.org/10.48550/arXiv.2503.00164 / Published by ArXiv / on (web) Publishing site
4 Agentic AI and Frontier AI in Cybersecurity
6 Threat Intelligence Feeds and Sources in the Era of Frontier AI


Medical Hallucinations in Foundation Models and Their Impact on Healthcare / 2503.05777 / ISBN:https://doi.org/10.48550/arXiv.2503.05777 / Published by ArXiv / on (web) Publishing site
2 LLM Hallucinations in Medicine
4 Detection and Evaluation of Medical Hallucinations
7 Annotations of Medical Hallucination with Clinical Case Records
9 Regulatory and Legal Considerations for AI Hallucinations in Healthcare


Decoding the Black Box: Integrating Moral Imagination with Technical AI Governance / 2503.06411 / ISBN:https://doi.org/10.48550/arXiv.2503.06411 / Published by ArXiv / on (web) Publishing site
6 Case Studies and Domain Applications
7 AI Security, Safety, and Governance: A Systemic Perspective


Detecting Dataset Bias in Medical AI: A Generalized and Modality-Agnostic Auditing Framework / 2503.09969 / ISBN:https://doi.org/10.48550/arXiv.2503.09969 / Published by ArXiv / on (web) Publishing site
Abstract


LLMs in Disease Diagnosis: A Comparative Study of DeepSeek-R1 and O3 Mini Across Chronic Health Conditions / 2503.10486 / ISBN:https://doi.org/10.48550/arXiv.2503.10486 / Published by ArXiv / on (web) Publishing site
1 Introduction
4 Results


DarkBench: Benchmarking Dark Patterns in Large Language Models / 2503.10728 / ISBN:https://doi.org/10.48550/arXiv.2503.10728 / Published by ArXiv / on (web) Publishing site
Appendices


Policy Frameworks for Transparent Chain-of-Thought Reasoning in Large Language Models / 2503.14521 / ISBN:https://doi.org/10.48550/arXiv.2503.14521 / Published by ArXiv / on (web) Publishing site
4 Arguments against Transparent CoT


A Peek Behind the Curtain: Using Step-Around Prompt Engineering to Identify Bias and Misinformation in GenAI Models / 2503.15205 / ISBN:https://doi.org/10.48550/arXiv.2503.15205 / Published by ArXiv / on (web) Publishing site
Step-Around Prompting: A Research Tool and Potential Threat


Gender and content bias in Large Language Models: a case study on Google Gemini 2.0 Flash Experimental / 2503.16534 / ISBN:https://doi.org/10.48550/arXiv.2503.16534 / Published by ArXiv / on (web) Publishing site
4 Discussion


BEATS: Bias Evaluation and Assessment Test Suite for Large Language Models / 2503.24310 / ISBN:https://doi.org/10.48550/arXiv.2503.24310 / Published by ArXiv / on (web) Publishing site
2 Proposed Framework - BEATS


Who Owns the Output? Bridging Law and Technology in LLMs Attribution / 2504.01032 / ISBN:https://doi.org/10.48550/arXiv.2504.01032 / Published by ArXiv / on (web) Publishing site
3 From Legal Frameworks to Technological Solutions
4 Legal cases as Use Cases for Attribution Methods


Language-Dependent Political Bias in AI: A Study of ChatGPT and Gemini / 2504.06436 / ISBN:https://doi.org/10.48550/arXiv.2504.06436 / Published by ArXiv / on (web) Publishing site
5. Conclusion


Towards interactive evaluations for interaction harms in human-AI systems / 2405.10632 / ISBN:https://doi.org/10.48550/arXiv.2405.10632 / Published by ArXiv / on (web) Publishing site
1 Introduction
3 Why current evaluation approaches are insufficient for assessing interaction harms


A Comprehensive Survey on Integrating Large Language Models with Knowledge-Based Methods / 2501.13947 / ISBN:https://doi.org/10.48550/arXiv.2501.13947 / Published by ArXiv / on (web) Publishing site
4. Solutions to address LLM challenges


Confirmation Bias in Generative AI Chatbots: Mechanisms, Risks, Mitigation Strategies, and Future Research Directions / 2504.09343 / ISBN:https://doi.org/10.48550/arXiv.2504.09343 / Published by ArXiv / on (web) Publishing site
1. Introduction
4. Mechanisms of Confirmation Bias in Chatbot Architectures
8. Conclusion


Values in the Wild: Discovering and Analyzing Values in Real-World Language Model Interactions / 2504.15236 / ISBN:https://doi.org/10.48550/arXiv.2504.15236 / Published by ArXiv / on (web) Publishing site
Appendix


AI Ethics and Social Norms: Exploring ChatGPT's Capabilities From What to How / 2504.18044 / ISBN:https://doi.org/10.48550/arXiv.2504.18044 / Published by ArXiv / on (web) Publishing site
4 Result


The Convergent Ethics of AI? Analyzing Moral Foundation Priorities in Large Language Models with a Multi-Framework Approach / 2504.19255 / ISBN:https://doi.org/10.48550/arXiv.2504.19255 / Published by ArXiv / on (web) Publishing site
Findings


Balancing Creativity and Automation: The Influence of AI on Modern Film Production and Dissemination / 2504.19275 / ISBN:https://doi.org/10.48550/arXiv.2504.19275 / Published by ArXiv / on (web) Publishing site
1. Introduction


TF1-EN-3M: Three Million Synthetic Moral Fables for Training Small, Open Language Models / 2504.20605 / ISBN:https://doi.org/10.48550/arXiv.2504.20605 / Published by ArXiv / on (web) Publishing site
5 Discussion and threats to validity


Federated learning, ethics, and the double black box problem in medical AI / 2504.20656 / ISBN:https://doi.org/10.48550/arXiv.2504.20656 / Published by ArXiv / on (web) Publishing site
6 Further ethical concerns


Generative AI in Financial Institution: A Global Survey of Opportunities, Threats, and Regulation / 2504.21574 / ISBN:https://doi.org/10.48550/arXiv.2504.21574 / Published by ArXiv / on (web) Publishing site
3. Emerging Cybersecurity Threats to Financial Institution


From Texts to Shields: Convergence of Large Language Models and Cybersecurity / 2505.00841 / ISBN:https://doi.org/10.48550/arXiv.2505.00841 / Published by ArXiv / on (web) Publishing site
5 LLM Interpretability, Safety, and Security
6 Conclusion


Emotions in the Loop: A Survey of Affective Computing for Emotional Support / 2505.01542 / ISBN:https://doi.org/10.48550/arXiv.2505.01542 / Published by ArXiv / on (web) Publishing site
V. Technological Strengths
VIII. Challenges and Weaknesses


Towards Safer Pretraining: Analyzing and Filtering Harmful Content in Webscale datasets for Responsible LLMs / 2505.02009 / ISBN:https://doi.org/10.48550/arXiv.2505.02009 / Published by ArXiv / on (web) Publishing site
6 Results


AI and Generative AI Transforming Disaster Management: A Survey of Damage Assessment and Response Techniques / 2505.08202 / ISBN:https://doi.org/10.48550/arXiv.2505.08202 / Published by ArXiv / on (web) Publishing site
III Data Modalities and Generative AI Applications
IV Misinformation and Adversarial Attacks


Ethics and Persuasion in Reinforcement Learning from Human Feedback: A Procedural Rhetorical Approach / 2505.09576 / ISBN:https://doi.org/10.48550/arXiv.2505.09576 / Published by ArXiv / on (web) Publishing site
V Conclusion and Future Work


WorldView-Bench: A Benchmark for Evaluating Global Cultural Perspectives in Large Language Models / 2505.09595 / ISBN:https://doi.org/10.48550/arXiv.2505.09595 / Published by ArXiv / on (web) Publishing site
2 Background
3 Methodology and System Design
6 Discussion and Potential Limitations


AI LEGO: Scaffolding Cross-Functional Collaboration in Industrial Responsible AI Practices during Early Design Stages / 2505.10300 / ISBN:https://doi.org/10.48550/arXiv.2505.10300 / Published by ArXiv / on (web) Publishing site
Appendices


Exploring Moral Exercises for Human Oversight of AI systems: Insights from Three Pilot Studies / 2505.15851 / ISBN:https://doi.org/10.48550/arXiv.2505.15851 / Published by ArXiv / on (web) Publishing site
2 Moral Exercises in the Context of Human Oversight


SweEval: Do LLMs Really Swear? A Safety Benchmark for Testing Limits for Enterprise Use / 2505.17332 / ISBN:https://doi.org/10.48550/arXiv.2505.17332 / Published by ArXiv / on (web) Publishing site
1 Introduction


Just as Humans Need Vaccines, So Do Models: Model Immunization to Combat Falsehoods / 2505.17870 / ISBN:https://doi.org/10.48550/arXiv.2505.17870 / Published by ArXiv / on (web) Publishing site
2 Conceptual Framework: Model Immunization via Quarantined Falsehoods
Appendix


Making Sense of the Unsensible: Reflection, Survey, and Challenges for XAI in Large Language Models Toward Human-Centered AI / 2505.20305 / ISBN:https://doi.org/10.48550/arXiv.2505.20305 / Published by ArXiv / on (web) Publishing site
2 Beyond Transparency: Why Explainability Is Essential for LLMs?
3 What Is XAI in the Context of LLMs?


Can we Debias Social Stereotypes in AI-Generated Images? Examining Text-to-Image Outputs and User Perceptions / 2505.20692 / ISBN:https://doi.org/10.48550/arXiv.2505.20692 / Published by ArXiv / on (web) Publishing site
6 Discussion


Simulating Ethics: Using LLM Debate Panels to Model Deliberation on Medical Dilemmas / 2505.21112 / ISBN:https://doi.org/10.48550/arXiv.2505.21112 / Published by ArXiv / on (web) Publishing site
3 Methodology
5. Discussion


Are Language Models Consequentialist or Deontological Moral Reasoners? / 2505.21479 / ISBN:https://doi.org/10.48550/arXiv.2505.21479 / Published by ArXiv / on (web) Publishing site
Appendix


Exploring Societal Concerns and Perceptions of AI: A Thematic Analysis through the Lens of Problem-Seeking / 2505.23930 / ISBN:https://doi.org/10.48550/arXiv.2505.23930 / Published by ArXiv / on (web) Publishing site
Methods
Discussion


Locating Risk: Task Designers and the Challenge of Risk Disclosure in RAI Content Work / 2505.24246 / ISBN:https://doi.org/10.48550/arXiv.2505.24246 / Published by ArXiv / on (web) Publishing site
4 Findings


DeepSeek in Healthcare: A Survey of Capabilities, Risks, and Clinical Applications of Open-Source Large Language Models / 2506.01257 / ISBN:https://doi.org/10.48550/arXiv.2506.01257 / Published by ArXiv / on (web) Publishing site
Clinical Applications
Strengths and Limitations of DeepSeek-R1