To search with more than one keyword, separate the keywords with an underscore (_). The list of search keywords can be up to 50 characters long. If you modify the keywords, press Enter within the field to confirm the new search key.
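As an illustration only (not part of the site's interface), here is a minimal Python sketch of how a multi-keyword search key could be composed and checked under these rules; the function name build_search_key is hypothetical:

```python
# Illustrative sketch only: compose a search key under the stated rules
# (keywords joined by "_", total length at most 50 characters).
def build_search_key(keywords):
    """Join keywords with underscores and enforce the 50-character limit."""
    key = "_".join(k.strip() for k in keywords if k.strip())
    if len(key) > 50:
        raise ValueError(f"search key is {len(key)} characters; the limit is 50")
    return key

# Example: a two-keyword search key such as "openai_chatgpt".
print(build_search_key(["openai", "chatgpt"]))
```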
Tag: openai
Bibliography items where the tag occurs: 274
- The AI Index 2022 Annual Report / 2205.03468 / ISBN:https://doi.org/10.48550/arXiv.2205.03468 / Published by ArXiv / on (web) Publishing site
- Chapter 2 Technical Performance
Appendix - A Framework for Ethical AI at the United Nations / 2104.12547 / ISBN:https://doi.org/10.48550/arXiv.2104.12547 / Published by ArXiv / on (web) Publishing site
- 1. Problems with AI
- GPT detectors are biased against non-native English writers / 2304.02819 / ISBN:https://doi.org/10.48550/arXiv.2304.02819 / Published by ArXiv / on (web) Publishing site
- Materials and Methods
- A multilevel framework for AI governance / 2307.03198 / ISBN:https://doi.org/10.48550/arXiv.2307.03198 / Published by ArXiv / on (web) Publishing site
- 5. AI Literacy and Governance by Citizen
- The Ethics of AI Value Chains / 2307.16787 / ISBN:https://doi.org/10.48550/arXiv.2307.16787 / Published by ArXiv / on (web) Publishing site
- 4. Ethical Implications of AI Value Chains
- Regulating AI manipulation: Applying Insights from behavioral economics and psychology to enhance the practicality of the EU AI Act / 2308.02041 / ISBN:https://doi.org/10.48550/arXiv.2308.02041 / Published by ArXiv / on (web) Publishing site
- 2 Clarifying Terminologies of Article-5: Insights from Behavioral Economics and Psychology
- Ethical Considerations and Policy Implications for Large Language Models: Guiding Responsible Development and Deployment / 2308.02678 / ISBN:https://doi.org/10.48550/arXiv.2308.02678 / Published by ArXiv / on (web) Publishing site
- Introduction
System-role - Dual Governance: The intersection of centralized regulation and crowdsourced safety mechanisms for Generative AI / 2308.04448 / ISBN:https://doi.org/10.48550/arXiv.2308.04448 / Published by ArXiv / on (web) Publishing site
- 2 Background
- The Future of ChatGPT-enabled Labor Market: A Preliminary Study / 2304.09823 / ISBN:https://doi.org/10.48550/arXiv.2304.09823 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Results - A Survey of Safety and Trustworthiness of Large Language Models through the Lens of Verification and Validation / 2305.11391 / ISBN:https://doi.org/10.48550/arXiv.2305.11391 / Published by ArXiv / on (web) Publishing site
- 2 Large Language Models
3 Vulnerabilities, Attack, and Limitations
8 Regulations and Ethical Use - Getting pwn'd by AI: Penetration Testing with Large Language Models / 2308.00121 / ISBN:https://doi.org/10.48550/arXiv.2308.00121 / Published by ArXiv / on (web) Publishing site
- 4 Discussion
5 A vision of AI-augmented pen-testing - Building Trust in Conversational AI: A Comprehensive Review and Solution Architecture for Explainable, Privacy-Aware Systems using LLMs and Knowledge Graph / 2308.13534 / ISBN:https://doi.org/10.48550/arXiv.2308.13534 / Published by ArXiv / on (web) Publishing site
- I. Introduction
III. Comprehensive review of state-of-the-art LLMs - Artificial Intelligence in Career Counseling: A Test Case with ResumAI / 2308.14301 / ISBN:https://doi.org/10.48550/arXiv.2308.14301 / Published by ArXiv / on (web) Publishing site
- 3 Methods
- Rethinking Machine Ethics -- Can LLMs Perform Moral Reasoning through the Lens of Moral Theories? / 2308.15399 / ISBN:https://doi.org/10.48550/arXiv.2308.15399 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
4 Experiment - The Impact of Artificial Intelligence on the Evolution of Digital Education: A Comparative Study of OpenAI Text Generation Tools including ChatGPT, Bing Chat, Bard, and Ernie / 2309.02029 / ISBN:https://doi.org/10.48550/arXiv.2309.02029 / Published by ArXiv / on (web) Publishing site
- Abstract
1. Introduction
2. Related work
3. ChatGPT Training Process
4. Methods
5. Discussion
6. Conclusion - Pathway to Future Symbiotic Creativity / 2209.02388 / ISBN:https://doi.org/10.48550/arXiv.2209.02388 / Published by ArXiv / on (web) Publishing site
- Part 3 - 3 Comparison with Generative Models
Acknowledgment - The Cambridge Law Corpus: A Corpus for Legal AI Research / 2309.12269 / ISBN:https://doi.org/10.48550/arXiv.2309.12269 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
4 Experiments
General References
F Evaluation of GPT Models - STREAM: Social data and knowledge collective intelligence platform for TRaining Ethical AI Models / 2310.05563 / ISBN:https://doi.org/10.48550/arXiv.2310.05563 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
- Regulation and NLP (RegNLP): Taming Large Language Models / 2310.05553 / ISBN:https://doi.org/10.48550/arXiv.2310.05553 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
4 Scientific Expertise, Social Media and Regulatory Capture - Commercialized Generative AI: A Critical Study of the Feasibility and Ethics of Generating Native Advertising Using Large Language Models in Conversational Web Search / 2310.04892 / ISBN:https://doi.org/10.48550/arXiv.2310.04892 / Published by ArXiv / on (web) Publishing site
- Introduction
Pilot Study: Text SERPs with Ads - The Return on Investment in AI Ethics: A Holistic Framework / 2309.13057 / ISBN:https://doi.org/10.48550/arXiv.2309.13057 / Published by ArXiv / on (web) Publishing site
- 4. A Holistic Framework
- An Evaluation of GPT-4 on the ETHICS Dataset / 2309.10492 / ISBN:https://doi.org/10.48550/arXiv.2309.10492 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Datasets and Methods - Toward an Ethics of AI Belief / 2304.14577 / ISBN:https://doi.org/10.48550/arXiv.2304.14577 / Published by ArXiv / on (web) Publishing site
- 4. Nascent Extant Work that Falls Within the Ethics of AI Belief
- Responsible AI Pattern Catalogue: A Collection of Best Practices for AI Governance and Engineering / 2209.04963 / ISBN:https://doi.org/10.48550/arXiv.2209.04963 / Published by ArXiv / on (web) Publishing site
- 4 Process Patterns
- AI Alignment and Social Choice: Fundamental Limitations and Policy Implications / 2310.16048 / ISBN:https://doi.org/10.48550/arXiv.2310.16048 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
4 Implications for AI Governance and Policy - Unpacking the Ethical Value Alignment in Big Models / 2310.17551 / ISBN:https://doi.org/10.48550/arXiv.2310.17551 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
3 Investigating the Ethical Values of Large Language Models - Unlocking the Potential of ChatGPT: A Comprehensive Exploration of its Applications, Advantages, Limitations, and Future Directions in Natural Language Processing / 2304.02017 / ISBN:https://doi.org/10.48550/arXiv.2304.02017 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Overview of ChatGPT and its capabilities
4 Applications of ChatGPT in real-world scenarios
6 Limitations and potential challenges - Fairness And Bias in Artificial Intelligence: A Brief Survey of Sources, Impacts, And Mitigation Strategies / 2304.07683 / ISBN:https://doi.org/10.48550/arXiv.2304.07683 / Published by ArXiv / on (web) Publishing site
- II. Sources of bias in AI
- A Brief History of Prompt: Leveraging Language Models. (Through Advanced Prompting) / 2310.04438 / ISBN:https://doi.org/10.48550/arXiv.2310.04438 / Published by ArXiv / on (web) Publishing site
- X. 2020-2021: the rise of LLMs
- She had Cobalt Blue Eyes: Prompt Testing to Create Aligned and Sustainable Language Models / 2310.18333 / ISBN:https://doi.org/10.48550/arXiv.2310.18333 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
3 ReFLeCT: Robust, Fair, and Safe LLM Construction Test Suite
4 Empirical Evaluation and Outcomes - How Trustworthy are Open-Source LLMs? An Assessment under Malicious Demonstrations Shows their Vulnerabilities / 2311.09447 / ISBN:https://doi.org/10.48550/arXiv.2311.09447 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
- Prudent Silence or Foolish Babble? Examining Large Language Models' Responses to the Unknown / 2311.09731 / ISBN:https://doi.org/10.48550/arXiv.2311.09731 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
3 Experiments - Revolutionizing Customer Interactions: Insights and Challenges in Deploying ChatGPT and Generative Chatbots for FAQs / 2311.09976 / ISBN:https://doi.org/10.48550/arXiv.2311.09976 / Published by ArXiv / on (web) Publishing site
- 1. Introduction
4. ChatGPT
5. Applications - Case Repositories: Towards Case-Based Reasoning for AI Alignment / 2311.10934 / ISBN:https://doi.org/10.48550/arXiv.2311.10934 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Proposed Process
Acknowledgments and Disclosure of Funding - GPT in Data Science: A Practical Exploration of Model Selection / 2311.11516 / ISBN:https://doi.org/10.48550/arXiv.2311.11516 / Published by ArXiv / on (web) Publishing site
- II. Background
- The Rise of Creative Machines: Exploring the Impact of Generative AI / 2311.13262 / ISBN:https://doi.org/10.48550/arXiv.2311.13262 / Published by ArXiv / on (web) Publishing site
- II. Extent and impact of generative AI
- Towards Auditing Large Language Models: Improving Text-based Stereotype Detection / 2311.14126 / ISBN:https://doi.org/10.48550/arXiv.2311.14126 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
- Ethical Implications of ChatGPT in Higher Education: A Scoping Review / 2311.14378 / ISBN:https://doi.org/10.48550/arXiv.2311.14378 / Published by ArXiv / on (web) Publishing site
- Introduction
- Potential Societal Biases of ChatGPT in Higher Education: A Scoping Review / 2311.14381 / ISBN:https://doi.org/10.48550/arXiv.2311.14381 / Published by ArXiv / on (web) Publishing site
- Introduction
Overview of Societal Biases in GAI Models - Generative AI and US Intellectual Property Law / 2311.16023 / ISBN:https://doi.org/10.48550/arXiv.2311.16023 / Published by ArXiv / on (web) Publishing site
- I. Very slowly then all-at-once
- Survey on AI Ethics: A Socio-technical Perspective / 2311.17228 / ISBN:https://doi.org/10.48550/arXiv.2311.17228 / Published by ArXiv / on (web) Publishing site
- 5 Responsibility, accountability, and regulations
- Deepfakes, Misinformation, and Disinformation in the Era of Frontier AI, Generative AI, and Large AI Models / 2311.17394 / ISBN:https://doi.org/10.48550/arXiv.2311.17394 / Published by ArXiv / on (web) Publishing site
- III. The rise of large AI models
- Privacy and Copyright Protection in Generative AI: A Lifecycle Perspective / 2311.18252 / ISBN:https://doi.org/10.48550/arXiv.2311.18252 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
- Generative AI in Higher Education: Seeing ChatGPT Through Universities' Policies, Resources, and Guidelines / 2312.05235 / ISBN:https://doi.org/10.48550/arXiv.2312.05235 / Published by ArXiv / on (web) Publishing site
- 1. Introduction
2. Research questions - Contra generative AI detection in higher education assessments / 2312.05241 / ISBN:https://doi.org/10.48550/arXiv.2312.05241 / Published by ArXiv / on (web) Publishing site
- 1. Introduction
2. The pitfalls in detecting generative AI output
Acknowledgements - Intelligence Primer / 2008.07324 / ISBN:https://doi.org/10.48550/arXiv.2008.07324 / Published by ArXiv / on (web) Publishing site
- 6 Measuring intelligence
9 Augmenting human intelligence
12 Large language models and Generative AI - Control Risk for Potential Misuse of Artificial Intelligence in Science / 2312.06632 / ISBN:https://doi.org/10.48550/arXiv.2312.06632 / Published by ArXiv / on (web) Publishing site
- 6 Related Works
Appendix D Details of Benchmark Results - Disentangling Perceptions of Offensiveness: Cultural and Moral Correlates / 2312.06861 / ISBN:https://doi.org/10.48550/arXiv.2312.06861 / Published by ArXiv / on (web) Publishing site
- ...
- The AI Assessment Scale (AIAS): A Framework for Ethical Integration of Generative AI in Educational Assessment / 2312.07086 / ISBN:https://doi.org/10.48550/arXiv.2312.07086 / Published by ArXiv / on (web) Publishing site
- Introduction
- Culturally Responsive Artificial Intelligence -- Problems, Challenges and Solutions / 2312.08467 / ISBN:https://doi.org/10.48550/arXiv.2312.08467 / Published by ArXiv / on (web) Publishing site
- Culturally responsive AI – current landscape
- Designing Guiding Principles for NLP for Healthcare: A Case Study of Maternal Health / 2312.11803 / ISBN:https://doi.org/10.48550/arXiv.2312.11803 / Published by ArXiv / on (web) Publishing site
- 3 Materials and methods
- Learning Human-like Representations to Enable Learning Human Values / 2312.14106 / ISBN:https://doi.org/10.48550/arXiv.2312.14106 / Published by ArXiv / on (web) Publishing site
- 4 Learning Human Morality Judgments
Acknowledgments and Disclosure of Funding - Improving Task Instructions for Data Annotators: How Clear Rules and Higher Pay Increase Performance in Data Annotation in the AI Economy / 2312.14565 / ISBN:https://doi.org/10.48550/arXiv.2312.14565 / Published by ArXiv / on (web) Publishing site
- I. Introduction
V. Discussion - Exploring the Frontiers of LLMs in Psychological Applications: A Comprehensive Review / 2401.01519 / ISBN:https://doi.org/10.48550/arXiv.2401.01519 / Published by ArXiv / on (web) Publishing site
- 2. LLMs in cognitive and behavioral psychology
7. Challenges and future directions - Synthetic Data in AI: Challenges, Applications, and Ethical Implications / 2401.01629 / ISBN:https://doi.org/10.48550/arXiv.2401.01629 / Published by ArXiv / on (web) Publishing site
- 2. The generation of synthetic data
- Business and ethical concerns in domestic Conversational Generative AI-empowered multi-robot systems / 2401.09473 / ISBN:https://doi.org/10.48550/arXiv.2401.09473 / Published by ArXiv / on (web) Publishing site
- 2 Background
- FAIR Enough: How Can We Develop and Assess a FAIR-Compliant Dataset for Large Language Models' Training? / 2401.11033 / ISBN:https://doi.org/10.48550/arXiv.2401.11033 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 FAIR Data Principles: Theoretical Background and Significance
Appendices - Enabling Global Image Data Sharing in the Life Sciences / 2401.13023 / ISBN:https://doi.org/10.48550/arXiv.2401.13023 / Published by ArXiv / on (web) Publishing site
- 3. Use cases representing different image data types and their challenges and status for sharing
- Beyond principlism: Practical strategies for ethical AI use in research practices / 2401.15284 / ISBN:https://doi.org/10.48550/arXiv.2401.15284 / Published by ArXiv / on (web) Publishing site
- 3 Five specific goals and action-guiding strategies for ethical AI use in research practices
- Detecting Multimedia Generated by Large AI Models: A Survey / 2402.00045 / ISBN:https://doi.org/10.48550/arXiv.2402.00045 / Published by ArXiv / on (web) Publishing site
- 2 Generation
- Generative Artificial Intelligence in Higher Education: Evidence from an Analysis of Institutional Policies and Guidelines / 2402.01659 / ISBN:https://doi.org/10.48550/arXiv.2402.01659 / Published by ArXiv / on (web) Publishing site
- 4. Findings
- Commercial AI, Conflict, and Moral Responsibility: A theoretical analysis and practical approach to the moral responsibilities associated with dual-use AI technology / 2402.01762 / ISBN:https://doi.org/10.48550/arXiv.2402.01762 / Published by ArXiv / on (web) Publishing site
- 2 Establishing the novel aspect of AI as a crossover technology
- (A)I Am Not a Lawyer, But...: Engaging Legal Experts towards Responsible LLM Policies for Legal Advice / 2402.01864 / ISBN:https://doi.org/10.48550/arXiv.2402.01864 / Published by ArXiv / on (web) Publishing site
- 3 Methods: case-based expert deliberation
7 Acknowledgement - Generative Artificial Intelligence in Healthcare: Ethical Considerations and Assessment Checklist / 2311.02107 / ISBN:https://doi.org/10.48550/arXiv.2311.02107 / Published by ArXiv / on (web) Publishing site
- Introduction
Discussion - How do machines learn? Evaluating the AIcon2abs method / 2401.07386 / ISBN:https://doi.org/10.48550/arXiv.2401.07386 / Published by ArXiv / on (web) Publishing site
- 1. Introduction
- I Think, Therefore I am: Benchmarking Awareness of Large Language Models Using AwareBench / 2401.17882 / ISBN:https://doi.org/10.48550/arXiv.2401.17882 / Published by ArXiv / on (web) Publishing site
- B Experimental Settings & Results
- Mapping the Ethics of Generative AI: A Comprehensive Scoping Review / 2402.08323 / ISBN:https://doi.org/10.48550/arXiv.2402.08323 / Published by ArXiv / on (web) Publishing site
- 2 Methods
- Inadequacies of Large Language Model Benchmarks in the Era of Generative Artificial Intelligence / 2402.09880 / ISBN:https://doi.org/10.48550/arXiv.2402.09880 / Published by ArXiv / on (web) Publishing site
- IV. Technological Aspects
- Copyleft for Alleviating AIGC Copyright Dilemma: What-if Analysis, Public Perception and Implications / 2402.12216 / ISBN:https://doi.org/10.48550/arXiv.2402.12216 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Related Work
4 Case Study Under the Copyleft - Evolving AI Collectives to Enhance Human Diversity and Enable Self-Regulation / 2402.12590 / ISBN:https://doi.org/10.48550/arXiv.2402.12590 / Published by ArXiv / on (web) Publishing site
- 2. Emergence of Free-Formed AI Collectives
3. Enhanced Performance of Free-Formed AI Collectives - Envisioning the Applications and Implications of Generative AI for News Media / 2402.18835 / ISBN:https://doi.org/10.48550/arXiv.2402.18835 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 The Suitability of Generative AI for Newsroom Tasks - FATE in MMLA: A Student-Centred Exploration of Fairness, Accountability, Transparency, and Ethics in Multimodal Learning Analytics / 2402.19071 / ISBN:https://doi.org/10.48550/arXiv.2402.19071 / Published by ArXiv / on (web) Publishing site
- 3. Methods
- A Survey on Human-AI Teaming with Large Pre-Trained Models / 2403.04931 / ISBN:https://doi.org/10.48550/arXiv.2403.04931 / Published by ArXiv / on (web) Publishing site
- 2 AI Model Improvements with Human-AI Teaming
- AGI Artificial General Intelligence for Education / 2304.12479 / ISBN:https://doi.org/10.48550/arXiv.2304.12479 / Published by ArXiv / on (web) Publishing site
- 2. What is AGI
- Review of Generative AI Methods in Cybersecurity / 2403.08701 / ISBN:https://doi.org/10.48550/arXiv.2403.08701 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
3 Cyber Offense
5 Implications of Generative AI in Social, Legal, and Ethical Domains - Evaluation Ethics of LLMs in Legal Domain / 2403.11152 / ISBN:https://doi.org/10.48550/arXiv.2403.11152 / Published by ArXiv / on (web) Publishing site
- 2 Related Work
4 Experiment - AI Ethics: A Bibliometric Analysis, Critical Issues, and Key Gaps / 2403.14681 / ISBN:https://doi.org/10.48550/arXiv.2403.14681 / Published by ArXiv / on (web) Publishing site
- Introduction
AI Ethics Development Phases Based on Keyword Analysis
Key AI Ethics Issues - Safeguarding Marketing Research: The Generation, Identification, and Mitigation of AI-Fabricated Disinformation / 2403.14706 / ISBN:https://doi.org/10.48550/arXiv.2403.14706 / Published by ArXiv / on (web) Publishing site
- Results
- The Journey to Trustworthy AI - Part 1: Pursuit of Pragmatic Frameworks / 2403.15457 / ISBN:https://doi.org/10.48550/arXiv.2403.15457 / Published by ArXiv / on (web) Publishing site
- 4 AI Regulation: Current Global Landscape
5 Risk - AI Act and Large Language Models (LLMs): When critical issues and privacy impact require human and ethical oversight / 2404.00600 / ISBN:https://doi.org/10.48550/arXiv.2404.00600 / Published by ArXiv / on (web) Publishing site
- 6. Large Language Models (LLMs) - Introduction
- Exploring the Nexus of Large Language Models and Legal Systems: A Short Survey / 2404.00990 / ISBN:https://doi.org/10.48550/arXiv.2404.00990 / Published by ArXiv / on (web) Publishing site
- 2 Applications of Large Language Models in Legal Tasks
- A Review of Multi-Modal Large Language and Vision Models / 2404.01322 / ISBN:https://doi.org/10.48550/arXiv.2404.01322 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 What is a Language Model?
3 Proprietary vs. Open Source LLMs
4 Specific Large Language Models
5 Vision Models and Multi-Modal Large Language Models - Balancing Progress and Responsibility: A Synthesis of Sustainability Trade-Offs of AI-Based Systems / 2404.03995 / ISBN:https://doi.org/10.48550/arXiv.2404.03995 / Published by ArXiv / on (web) Publishing site
- II. Background and Related Work
- Frontier AI Ethics: Anticipating and Evaluating the Societal Impacts of Language Model Agents / 2404.06750 / ISBN:https://arxiv.org/abs/2404.06750 / Published by ArXiv / on (web) Publishing site
- Introduction
A Primer - AI Alignment: A Comprehensive Survey / 2310.19852 / ISBN:https://doi.org/10.48550/arXiv.2310.19852 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Learning from Feedback
4 Assurance
5 Governance - Generative Ghosts: Anticipating Benefits and Risks of AI Afterlives / 2402.01662 / ISBN:https://doi.org/10.48550/arXiv.2402.01662 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
- Taxonomy to Regulation: A (Geo)Political Taxonomy for AI Risks and Regulatory Measures in the EU AI Act / 2404.11476 / ISBN:https://doi.org/10.48550/arXiv.2404.11476 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
3 A Geo-Political AI Risk Taxonomy - The Necessity of AI Audit Standards Boards / 2404.13060 / ISBN:https://doi.org/10.48550/arXiv.2404.13060 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Audit the process, not just the product
3 Governance for safety - Who Followed the Blueprint? Analyzing the Responses of U.S. Federal Agencies to the Blueprint for an AI Bill of Rights / 2404.19076 / ISBN:https://doi.org/10.48550/arXiv.2404.19076 / Published by ArXiv / on (web) Publishing site
- Findings
- Not a Swiss Army Knife: Academics' Perceptions of Trade-Offs Around Generative Artificial Intelligence Use / 2405.00995 / ISBN:https://doi.org/10.48550/arXiv.2405.00995 / Published by ArXiv / on (web) Publishing site
- 4 Findings
5 Discussion - A Survey on Large Language Models for Critical Societal Domains: Finance, Healthcare, and Law / 2405.01769 / ISBN:https://doi.org/10.48550/arXiv.2405.01769 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
4 Medicine and Healthcare - AI-Powered Autonomous Weapons Risk Geopolitical Instability and Threaten AI Research / 2405.01859 / ISBN:https://doi.org/10.48550/arXiv.2405.01859 / Published by ArXiv / on (web) Publishing site
- 2. Current State of AWS
- A Fourth Wave of Open Data? Exploring the Spectrum of Scenarios for Open Data and Generative AI / 2405.04333 / ISBN:https://doi.org/10.48550/arXiv.2405.04333 / Published by ArXiv / on (web) Publishing site
- 1. Introduction
3. A Spectrum of Scenarios of Open Data for Generative AI - Guiding the Way: A Comprehensive Examination of AI Guidelines in Global Media / 2405.04706 / ISBN:https://doi.org/10.48550/arXiv.2405.04706 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
- Redefining Qualitative Analysis in the AI Era: Utilizing ChatGPT for Efficient Thematic Analysis / 2309.10771 / ISBN:https://doi.org/10.48550/arXiv.2309.10771 / on (web) Publishing site
- 2 Related Work
5 Analyses of the Design Process - Unsocial Intelligence: an Investigation of the Assumptions of AGI Discourse / 2401.13142 / ISBN:https://doi.org/10.48550/arXiv.2401.13142 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
3 The Motley Choices of AGI Discourse
A Dimensions of AGI: a Summary - The Wolf Within: Covert Injection of Malice into MLLM Societies via an MLLM Operative / 2402.14859 / ISBN:https://doi.org/10.48550/arXiv.2402.14859 / Published by ArXiv / on (web) Publishing site
- 4. Experiments
- Social Choice Should Guide AI Alignment in Dealing with Diverse Human Feedback / 2404.10271 / ISBN:https://doi.org/10.48550/arXiv.2404.10271 / Published by ArXiv / on (web) Publishing site
- 1. Introduction
- A scoping review of using Large Language Models (LLMs) to investigate Electronic Health Records (EHRs) / 2405.03066 / ISBN:https://doi.org/10.48550/arXiv.2405.03066 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
3 Results - Using ChatGPT for Thematic Analysis / 2405.08828 / ISBN:https://doi.org/10.48550/arXiv.2405.08828 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Coding in Thematic Analysis: Manual vs GPT-driven Approaches
6 OpenAI Updates on Policies and Model Capabilities: Implications for Thematic Analysis - When AI Eats Itself: On the Caveats of Data Pollution in the Era of Generative AI / 2405.09597 / ISBN:https://doi.org/10.48550/arXiv.2405.09597 / Published by ArXiv / on (web) Publishing site
- 3 RQ2: What Technical Strategies Can Be Employed to Mitigate the Negative Consequences of AI Autophagy?
- The ethical situation of DALL-E 2 / 2405.19176 / ISBN:https://doi.org/10.48550/arXiv.2405.19176 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
4 Following the RRI (Responsible research innovation) principles
7 Conclusion - The Future of Child Development in the AI Era. Cross-Disciplinary Perspectives Between AI and Child Development Experts / 2405.19275 / ISBN:https://doi.org/10.48550/arXiv.2405.19275 / Published by ArXiv / on (web) Publishing site
- 1. Introduction
3. Discussion - Using Large Language Models for Humanitarian Frontline Negotiation: Opportunities and Considerations / 2405.20195 / ISBN:https://doi.org/10.48550/arXiv.2405.20195 / Published by ArXiv / on (web) Publishing site
- 1. Introduction
A. Appendix - Transforming Computer Security and Public Trust Through the Exploration of Fine-Tuning Large Language Models / 2406.00628 / ISBN:https://doi.org/10.48550/arXiv.2406.00628 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Background, Foundational Studies, and Discussion:
3 Experimental Design, Overview, and Discussion
4 Comparative Analysis of Pre-Trained Models.
5 Discussion and further research - How Ethical Should AI Be? How AI Alignment Shapes the Risk Preferences of LLMs / 2406.01168 / ISBN:https://doi.org/10.48550/arXiv.2406.01168 / Published by ArXiv / on (web) Publishing site
- I. Description of Method/Empirical Design
III. Impact of Alignment on LLMs’ Risk Preferences - MoralBench: Moral Evaluation of LLMs / 2406.04428 / Published by ArXiv / on (web) Publishing site
- 4 Experiments
- Can Prompt Modifiers Control Bias? A Comparative Analysis of Text-to-Image Generative Models / 2406.05602 / Published by ArXiv / on (web) Publishing site
- 1. Introduction
- An Empirical Design Justice Approach to Identifying Ethical Considerations in the Intersection of Large Language Models and Social Robotics / 2406.06400 / ISBN:https://doi.org/10.48550/arXiv.2406.06400 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
- The Ethics of Interaction: Mitigating Security Threats in LLMs / 2401.12273 / ISBN:https://doi.org/10.48550/arXiv.2401.12273 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Why Ethics Matter in LLM Attacks? - Global AI Governance in Healthcare: A Cross-Jurisdictional Regulatory Analysis / 2406.08695 / ISBN:https://doi.org/10.48550/arXiv.2406.08695 / Published by ArXiv / on (web) Publishing site
- 5 Generative AI: The New Frontier
- Some things never change: how far generative AI can really change software engineering practice / 2406.09725 / ISBN:https://doi.org/10.48550/arXiv.2406.09725 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
4 Results - Applications of Generative AI in Healthcare: algorithmic, ethical, legal and societal considerations / 2406.10632 / ISBN:https://doi.org/10.48550/arXiv.2406.10632 / Published by ArXiv / on (web) Publishing site
- III. Analysis
Appendix C Algorithmic / technical aspects - Current state of LLM Risks and AI Guardrails / 2406.12934 / ISBN:https://doi.org/10.48550/arXiv.2406.12934 / Published by ArXiv / on (web) Publishing site
- 3 Strategies in Securing Large Language Models
- Leveraging Large Language Models for Patient Engagement: The Power of Conversational AI in Digital Health / 2406.13659 / ISBN:https://doi.org/10.48550/arXiv.2406.13659 / Published by ArXiv / on (web) Publishing site
- ACKNOWLEDGMENTS
- AI Alignment through Reinforcement Learning from Human Feedback? Contradictions and Limitations / 2406.18346 / ISBN:https://doi.org/10.48550/arXiv.2406.18346 / Published by ArXiv / on (web) Publishing site
- 2 Background
- A Blueprint for Auditing Generative AI / 2407.05338 / ISBN:https://doi.org/10.48550/arXiv.2407.05338 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Why audit generative AI systems? - Auditing of AI: Legal, Ethical and Technical Approaches / 2407.06235 / Published by ArXiv / on (web) Publishing site
- 3 The need to audit AI systems – a confluence of top-down and bottom-up pressures
- Bridging the Global Divide in AI Regulation: A Proposal for a Contextual, Coherent, and Commensurable Framework / 2303.11196 / ISBN:https://doi.org/10.48550/arXiv.2303.11196 / Published by ArXiv / on (web) Publishing site
- I. Introduction
IV. Proposing an Alternative 3C Framework - Have We Reached AGI? Comparing ChatGPT, Claude, and Gemini to Human Literacy and Education Benchmarks / 2407.09573 / ISBN:https://doi.org/10.48550/arXiv.2407.09573 / Published by ArXiv / on (web) Publishing site
- 2 Literature Review
3 Methodology
4 Data Analysis and Results
6 Conclusion - Generative AI for Health Technology Assessment: Opportunities, Challenges, and Policy Considerations / 2407.11054 / ISBN:https://doi.org/10.48550/arXiv.2407.11054 / Published by ArXiv / on (web) Publishing site
- A brief history of AI and generative AI
Applications of generative AI in literature reviews and evidence synthesis
Glossary - Prioritizing High-Consequence Biological Capabilities in Evaluations of Artificial Intelligence Models / 2407.13059 / ISBN:https://doi.org/10.48550/arXiv.2407.13059 / Published by ArXiv / on (web) Publishing site
- Introduction
Next Steps for AI Biosecurity Evaluations - Assurance of AI Systems From a Dependability Perspective / 2407.13948 / ISBN:https://doi.org/10.48550/arXiv.2407.13948 / Published by ArXiv / on (web) Publishing site
- 3 Assurance of AI Systems for Specific Functions
5 Assurance and Alignment for AGI
6 Summary and Conclusion - Open Artificial Knowledge / 2407.14371 / ISBN:https://doi.org/10.48550/arXiv.2407.14371 / Published by ArXiv / on (web) Publishing site
- 2. Key Challenges of Artificial Data
3. OAK Dataset
Appendices - RogueGPT: dis-ethical tuning transforms ChatGPT4 into a Rogue AI in 158 Words / 2407.15009 / ISBN:https://doi.org/10.48550/arXiv.2407.15009 / Published by ArXiv / on (web) Publishing site
- I. Introduction
II. Background
III. Methodology
IV. Results
VI. Discussion
VII. Conclusion - Mapping the individual, social, and biospheric impacts of Foundation Models / 2407.17129 / ISBN:https://doi.org/10.48550/arXiv.2407.17129 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
- AI for All: Identifying AI incidents Related to Diversity and Inclusion / 2408.01438 / ISBN:https://doi.org/10.48550/arXiv.2408.01438 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
- Surveys Considered Harmful? Reflecting on the Use of Surveys in AI Research, Development, and Governance / 2408.01458 / ISBN:https://doi.org/10.48550/arXiv.2408.01458 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Related Work - Between Copyright and Computer Science: The Law and Ethics of Generative AI / 2403.14653 / ISBN:https://doi.org/10.48550/arXiv.2403.14653 / Published by ArXiv / on (web) Publishing site
- Introduction
I. The Why and How Behind LLMs
II. The Difference Between Academic and Commercial Research
III. A Guide for Data in LLM Research
IV. The Path Ahead - The Responsible Foundation Model Development Cheatsheet: A Review of Tools & Resources / 2406.16746 / ISBN:https://doi.org/10.48550/arXiv.2406.16746 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
3 Data Sources
7 Environmental Impact
8 Model Evaluation - Recent Advances in Generative AI and Large Language Models: Current Status, Challenges, and Perspectives / 2407.14962 / ISBN:https://doi.org/10.48550/arXiv.2407.14962 / Published by ArXiv / on (web) Publishing site
- I. Introduction
II. Generative AI
III. Language Modeling
V. Bridging Research Gaps and Future Directions - CIPHER: Cybersecurity Intelligent Penetration-testing Helper for Ethical Researcher / 2408.11650 / ISBN:https://doi.org/10.48550/arXiv.2408.11650 / Published by ArXiv / on (web) Publishing site
- 3. Methodology
- Promises and challenges of generative artificial intelligence for human learning / 2408.12143 / ISBN:https://doi.org/10.48550/arXiv.2408.12143 / Published by ArXiv / on (web) Publishing site
- 2 Promises
3 Challenges - Is Generative AI the Next Tactical Cyber Weapon For Threat Actors? Unforeseen Implications of AI Generated Cyber Attacks / 2408.12806 / ISBN:https://doi.org/10.48550/arXiv.2408.12806 / Published by ArXiv / on (web) Publishing site
- III. Generative AI
IV. Attack Methodology - Has Multimodal Learning Delivered Universal Intelligence in Healthcare? A Comprehensive Survey / 2408.12880 / ISBN:https://doi.org/10.48550/arXiv.2408.12880 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
- What Is Required for Empathic AI? It Depends, and Why That Matters for AI Developers and Users / 2408.15354 / ISBN:https://doi.org/10.48550/arXiv.2408.15354 / Published by ArXiv / on (web) Publishing site
- Acknowledgements
- A Survey for Large Language Models in Biomedicine / 2409.00133 / ISBN:https://doi.org/10.48550/arXiv.2409.00133 / Published by ArXiv / on (web) Publishing site
- 3 LLMs in Zero-Shot Biomedical Applications
- Digital Homunculi: Reimagining Democracy Research with Generative Agents / 2409.00826 / ISBN:https://doi.org/10.48550/arXiv.2409.00826 / Published by ArXiv / on (web) Publishing site
- 3. How GenAI Could Make a Difference
- DetoxBench: Benchmarking Large Language Models for Multitask Fraud & Abuse Detection / 2409.06072 / ISBN:https://doi.org/10.48550/arXiv.2409.06072 / Published by ArXiv / on (web) Publishing site
- 4 LLM Services (Infrastructure)
- On the Creativity of Large Language Models / 2304.00008 / ISBN:https://doi.org/10.48550/arXiv.2304.00008 / Published by ArXiv / on (web) Publishing site
- 3 Large Language Models and Boden’s Three Criteria
- LLM generated responses to mitigate the impact of hate speech / 2311.16905 / ISBN:https://doi.org/10.48550/arXiv.2311.16905 / Published by ArXiv / on (web) Publishing site
- 4 Hate Classifier Model
8 Discussion
10 Ethical Considerations
A Reproducibility - Views on AI aren't binary -- they're plural / 2312.14230 / ISBN:https://doi.org/10.48550/arXiv.2312.14230 / Published by ArXiv / on (web) Publishing site
- Overcoming the dichotomy: How to build bridges
- Integrating Generative AI in Hackathons: Opportunities, Challenges, and Educational Implications / 2401.17434 / ISBN:https://doi.org/10.48550/arXiv.2401.17434 / Published by ArXiv / on (web) Publishing site
- 1. Introduction
- Large language models as linguistic simulators and cognitive models in human research / 2402.04470 / ISBN:https://doi.org/10.48550/arXiv.2402.04470 / Published by ArXiv / on (web) Publishing site
- Using language models to simulate roles and model cognitive processes
- Navigating LLM Ethics: Advancements, Challenges, and Future Directions / 2406.18841 / ISBN:https://doi.org/10.48550/arXiv.2406.18841 / Published by ArXiv / on (web) Publishing site
- VI. Conclusion and Future directions
- Synthetic Human Memories: AI-Edited Images and Videos Can Implant False Memories and Distort Recollection / 2409.08895 / ISBN:https://doi.org/10.48550/arXiv.2409.08895 / Published by ArXiv / on (web) Publishing site
- Introduction
- Improving governance outcomes through AI documentation: Bridging theory and practice / 2409.08960 / ISBN:https://doi.org/10.48550/arXiv.2409.08960 / Published by ArXiv / on (web) Publishing site
- 4 Results
- ValueCompass: A Framework for Measuring Contextual Value Alignment Between Human and LLMs / 2409.09586 / ISBN:https://doi.org/10.48550/arXiv.2409.09586 / Published by ArXiv / on (web) Publishing site
- 3 ValueCompass Framework
4 Experimental Settings
5 Results - Reporting Non-Consensual Intimate Media: An Audit Study of Deepfakes / 2409.12138 / ISBN:https://doi.org/10.48550/arXiv.2409.12138 / Published by ArXiv / on (web) Publishing site
- 6 Conclusion
- GenAI Advertising: Risks of Personalizing Ads with LLMs / 2409.15436 / ISBN:https://doi.org/10.48550/arXiv.2409.15436 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
3 Chatbot Ad Engine Design
A Appendix - XTRUST: On the Multilingual Trustworthiness of Large Language Models / 2409.15762 / ISBN:https://doi.org/10.48550/arXiv.2409.15762 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
4 Experiments
Appendices - Ethical and Scalable Automation: A Governance and Compliance Framework for Business Applications / 2409.16872 / ISBN:https://doi.org/10.48550/arXiv.2409.16872 / Published by ArXiv / on (web) Publishing site
- 2. Literature Review
- Decoding Large-Language Models: A Systematic Overview of Socio-Technical Impacts, Constraints, and Emerging Questions / 2409.16974 / ISBN:https://doi.org/10.48550/arXiv.2409.16974 / Published by ArXiv / on (web) Publishing site
- 4 Characteristics of Publications
5 Aims & Objectives (RQ1) - Social Media Bot Policies: Evaluating Passive and Active Enforcement / 2409.18931 / ISBN:https://doi.org/10.48550/arXiv.2409.18931 / Published by ArXiv / on (web) Publishing site
- I. Introduction
II. Related Work
IV. Methodology
V. Results - Ethical software requirements from user reviews: A systematic literature review / 2410.01833 / ISBN:https://doi.org/10.48550/arXiv.2410.01833 / Published by ArXiv / on (web) Publishing site
- IV. Results
- DailyDilemmas: Revealing Value Preferences of LLMs with Quandaries of Daily Life / 2410.02683 / ISBN:https://doi.org/10.48550/arXiv.2410.02683 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
3 An Analysis of Synthetically Generated Dilemma Vignettes and Human Values in Daily Dilemmas
4 Unveiling LLM's Value Preferences Through Action Choices in Everyday Dilemmas
5 Examining LLM's Adherence to Design Principles and the Steerability of Value Preferences
6 Related Work
7 Conclusion - AI-Press: A Multi-Agent News Generating and Feedback Simulation System Powered by Large Language Models / 2410.07561 / ISBN:https://doi.org/10.48550/arXiv.2410.07561 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
4 Experimental Setup - From human-centered to social-centered artificial intelligence: Assessing ChatGPT's impact through disruptive events / 2306.00227 / ISBN:https://doi.org/10.48550/arXiv.2306.00227 / Published by ArXiv / on (web) Publishing site
- Introduction
The multiple levels of AI impact
The emerging social impacts of ChatGPT
Discussion - How Do AI Companies Fine-Tune Policy? Examining Regulatory Capture in AI Governance / 2410.13042 / ISBN:https://doi.org/10.48550/arXiv.2410.13042 / Published by ArXiv / on (web) Publishing site
- 3 Methods
5 Mechanisms of Industry Influence in US AI Policy - Data Defenses Against Large Language Models / 2410.13138 / ISBN:https://doi.org/10.48550/arXiv.2410.13138 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
- Do LLMs Have Political Correctness? Analyzing Ethical Biases and Jailbreak Vulnerabilities in AI Systems / 2410.13334 / ISBN:https://doi.org/10.48550/arXiv.2410.13334 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
- A Simulation System Towards Solving Societal-Scale Manipulation / 2410.13915 / ISBN:https://doi.org/10.48550/arXiv.2410.13915 / Published by ArXiv / on (web) Publishing site
- 3 Methodology
4 Analysis - Jailbreaking and Mitigation of Vulnerabilities in Large Language Models / 2410.15236 / ISBN:https://doi.org/10.48550/arXiv.2410.15236 / Published by ArXiv / on (web) Publishing site
- II. Background and Concepts
VI. Research Gaps and Future Directions - Towards Automated Penetration Testing: Introducing LLM Benchmark, Analysis, and Improvements / 2410.17141 / ISBN:https://doi.org/10.48550/arXiv.2410.17141 / Published by ArXiv / on (web) Publishing site
- 4 Evaluation
- Ethical Leadership in the Age of AI: Challenges, Opportunities and Framework for Ethical Leadership / 2410.18095 / ISBN:https://doi.org/10.48550/arXiv.2410.18095 / Published by ArXiv / on (web) Publishing site
- Case Studies of Ethical Leadership in AI
- Demystifying Large Language Models for Medicine: A Primer / 2410.18856 / ISBN:https://doi.org/10.48550/arXiv.2410.18856 / Published by ArXiv / on (web) Publishing site
- Large Language Model Selection
Deployment considerations - TRIAGE: Ethical Benchmarking of AI Models Through Mass Casualty Simulations / 2410.18991 / ISBN:https://doi.org/10.48550/arXiv.2410.18991 / Published by ArXiv / on (web) Publishing site
- Appendices
- Moral Agency in Silico: Exploring Free Will in Large Language Models / 2410.23310 / ISBN:https://doi.org/10.48550/arXiv.2410.23310 / Published by ArXiv / on (web) Publishing site
- Methodology
Conclusion - Web Scraping for Research: Legal, Ethical, Institutional, and Scientific Considerations / 2410.23432 / ISBN:https://doi.org/10.48550/arXiv.2410.23432 / Published by ArXiv / on (web) Publishing site
- 2 What is scraping?
Appendices - Using Large Language Models for a standard assessment mapping for sustainable communities / 2411.00208 / ISBN:https://doi.org/10.48550/arXiv.2411.00208 / Published by ArXiv / on (web) Publishing site
- 3 Methodology
- "I Always Felt that Something Was Wrong.": Understanding Compliance Risks and Mitigation Strategies when Professionals Use Large Language Models / 2411.04576 / ISBN:https://doi.org/10.48550/arXiv.2411.04576 / Published by ArXiv / on (web) Publishing site
- 3 Method: Semi-structured Interviews
4 Findings - Improving governance outcomes through AI documentation: Bridging theory and practice / 2409.08960 / ISBN:https://doi.org/10.48550/arXiv.2409.08960 / Published by ArXiv / on (web) Publishing site
- Large-scale moral machine experiment on large language models / 2411.06790 / ISBN:https://doi.org/10.48550/arXiv.2411.06790 / Published by ArXiv / on (web) Publishing site
- Introduction
Materials and methods - Programming with AI: Evaluating ChatGPT, Gemini, AlphaCode, and GitHub Copilot for Programmers / 2411.09224 / ISBN:https://doi.org/10.48550/arXiv.2411.09224 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Related Work
3 Transformer Architecture
6 Content
7 Response Accuracy
9 Fairness
10 Limitations - Generative AI in Multimodal User Interfaces: Trends, Challenges, and Cross-Platform Adaptability / 2411.10234 / ISBN:https://doi.org/10.48550/arXiv.2411.10234 / Published by ArXiv / on (web) Publishing site
- II. Problem Statement: the Interface Dilemma
- Bias in Large Language Models: Origin, Evaluation, and Mitigation / 2411.10915 / ISBN:https://doi.org/10.48550/arXiv.2411.10915 / Published by ArXiv / on (web) Publishing site
- 4. Bias Evaluation
- Chat Bankman-Fried: an Exploration of LLM Alignment in Finance / 2411.11853 / ISBN:https://doi.org/10.48550/arXiv.2411.11853 / Published by ArXiv / on (web) Publishing site
- 3 Experimental framework
4 Results
Appendices - GPT versus Humans: Uncovering Ethical Concerns in Conversational Generative AI-empowered Multi-Robot Systems / 2411.14009 / ISBN:https://doi.org/10.48550/arXiv.2411.14009 / Published by ArXiv / on (web) Publishing site
- 2 Background
- AI-Augmented Ethical Hacking: A Practical Examination of Manual Exploitation and Privilege Escalation in Linux Environments / 2411.17539 / ISBN:https://doi.org/10.48550/arXiv.2411.17539 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Generative AI and ChatGPT
3 Laboratory Setup - Examining Multimodal Gender and Content Bias in ChatGPT-4o / 2411.19140 / ISBN:https://doi.org/10.48550/arXiv.2411.19140 / Published by ArXiv / on (web) Publishing site
- 1. Introduction
5. Discussion on Content and Gender Biases in ChatGPT-4o - Towards a Practical Ethics of Generative AI in Creative Production Processes / 2412.03579 / ISBN:https://doi.org/10.48550/arXiv.2412.03579 / Published by ArXiv / on (web) Publishing site
- Ethics for AI in design
- Exploring AI Text Generation, Retrieval-Augmented Generation, and Detection Technologies: a Comprehensive Overview / 2412.03933 / ISBN:https://doi.org/10.48550/arXiv.2412.03933 / Published by ArXiv / on (web) Publishing site
- II. AI Text Generators (AITG)
- Large Language Models in Politics and Democracy: A Comprehensive Survey / 2412.04498 / ISBN:https://doi.org/10.48550/arXiv.2412.04498 / Published by ArXiv / on (web) Publishing site
- 2. Understanding Large Language Models
- Can OpenAI o1 outperform humans in higher-order cognitive thinking? / 2412.05753 / ISBN:https://doi.org/10.48550/arXiv.2412.05753 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Methods
3 Results
4 Discussion
5 Conclusion - Digital Democracy in the Age of Artificial Intelligence / 2412.07791 / ISBN:https://doi.org/10.48550/arXiv.2412.07791 / Published by ArXiv / on (web) Publishing site
- 5. Public Sphere and Political Advocacy
- CERN for AI: A Theoretical Framework for Autonomous Simulation-Based Artificial Intelligence Testing and Alignment / 2312.09402 / ISBN:https://doi.org/10.48550/arXiv.2312.09402 / Published by ArXiv / on (web) Publishing site
- Introduction
- Shaping AI's Impact on Billions of Lives / 2412.02730 / ISBN:https://doi.org/10.48550/arXiv.2412.02730 / Published by ArXiv / on (web) Publishing site
- II. Demystifying the Potential Impact on AI
- Research Integrity and GenAI: A Systematic Analysis of Ethical Challenges Across Research Phases / 2412.10134 / ISBN:https://doi.org/10.48550/arXiv.2412.10134 / Published by ArXiv / on (web) Publishing site
- Research Phases and AI Tools
Discussion - On Large Language Models in Mission-Critical IT Governance: Are We Ready Yet? / 2412.11698 / ISBN:https://doi.org/10.48550/arXiv.2412.11698 / Published by ArXiv / on (web) Publishing site
- IV. Discussions
- Bots against Bias: Critical Next Steps for Human-Robot Interaction / 2412.12542 / ISBN:https://doi.org/10.1017/9781009386708.023 / Published by ArXiv / on (web) Publishing site
- 3 Track: Against Bias in Robots
- Understanding and Evaluating Trust in Generative AI and Large Language Models for Spreadsheets / 2412.14062 / ISBN:https://doi.org/10.48550/arXiv.2412.14062 / Published by ArXiv / on (web) Publishing site
- 2.0 Trust in Automation
- Ethics and Technical Aspects of Generative AI Models in Digital Content Creation / 2412.16389 / ISBN:https://doi.org/10.48550/arXiv.2412.16389 / Published by ArXiv / on (web) Publishing site
- Appendices
- Large Language Model Safety: A Holistic Survey / 2412.17686 / ISBN:https://doi.org/10.48550/arXiv.2412.17686 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Taxonomy
5 Misuse
6 Autonomous AI Risks
7 Agent Safety
8 Interpretability for LLM Safety
9 Technology Roadmaps / Strategies to LLM Safety in Practice - Generative AI and LLMs in Industry: A text-mining Analysis and Critical Evaluation of Guidelines and Policy Statements Across Fourteen Industrial Sectors / 2501.00957 / ISBN:https://doi.org/10.48550/arXiv.2501.00957 / Published by ArXiv / on (web) Publishing site
- III. Qualitative Findings and Resultant Themes
VI. Concluding Remarks and Future Directions - INFELM: In-depth Fairness Evaluation of Large Text-To-Image Models / 2501.01973 / ISBN:https://doi.org/10.48550/arXiv.2501.01973 / Published by ArXiv / on (web) Publishing site
- 4 Method
- Hybrid Approaches for Moral Value Alignment in AI Agents: a Manifesto / 2312.01818 / ISBN:https://doi.org/10.48550/arXiv.2312.01818 / Published by ArXiv / on (web) Publishing site
- 2. Learning Morality in Machines
- Towards A Litmus Test for Common Sense / 2501.09913 / ISBN:https://doi.org/10.48550/arXiv.2501.09913 / Published by ArXiv / on (web) Publishing site
- 7 Mathematical Formulation for LLMs and AI
- Harnessing the Potential of Large Language Models in Modern Marketing Management: Applications, Future Directions, and Strategic Recommendations / 2501.10685 / ISBN:https://doi.org/10.48550/arXiv.2501.10685 / Published by ArXiv / on (web) Publishing site
- 1. Introduction
2. Technological Foundations - Development of Application-Specific Large Language Models to Facilitate Research Ethics Review / 2501.10741 / ISBN:https://doi.org/10.48550/arXiv.2501.10741 / Published by ArXiv / on (web) Publishing site
- I. Introduction
III. Generative AI for IRB review
IV. Application-Specific IRB LLMs - Governance of Generative AI in Creative Work: Consent, Credit, Compensation, and Beyond / 2501.11457 / ISBN:https://doi.org/10.48550/arXiv.2501.11457 / Published by ArXiv / on (web) Publishing site
- 2 Background
- Responsible Generative AI Use by Product Managers: Recoupling Ethical Principles and Practices / 2501.16531 / ISBN:https://doi.org/10.48550/arXiv.2501.16531 / Published by ArXiv / on (web) Publishing site
- 2. Background
- The Third Moment of AI Ethics: Developing Relatable and Contextualized Tools / 2501.16954 / ISBN:https://doi.org/10.48550/arXiv.2501.16954 / Published by ArXiv / on (web) Publishing site
- 2 The Challenges of AI Ethics
- A Case Study in Acceleration AI Ethics: The TELUS GenAI Conversational Agent / 2501.18038 / ISBN:https://doi.org/10.48550/arXiv.2501.18038 / Published by ArXiv / on (web) Publishing site
- 4. The TELUS customer support language tool
- Examining the Expanding Role of Synthetic Data Throughout the AI Development Pipeline / 2501.18493 / ISBN:https://doi.org/10.48550/arXiv.2501.18493 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
- Agentic AI: Expanding the Algorithmic Frontier of Creative Problem Solving / 2502.00289 / ISBN:https://doi.org/10.48550/arXiv.2502.00289 / Published by ArXiv / on (web) Publishing site
- Introduction
- Constructing AI ethics narratives based on real-world data: Human-AI collaboration in data-driven visual storytelling / 2502.00637 / ISBN:https://doi.org/10.48550/arXiv.2502.00637 / Published by ArXiv / on (web) Publishing site
- 2 Related Work
- Open Foundation Models in Healthcare: Challenges, Paradoxes, and Opportunities with GenAI Driven Personalized Prescription / 2502.04356 / ISBN:https://doi.org/10.48550/arXiv.2502.04356 / Published by ArXiv / on (web) Publishing site
- Abstract
I. Introduction
IV. Leveraging Open LLMs for Prescription: A Case Study - Safety at Scale: A Comprehensive Survey of Large Model Safety / 2502.05206 / ISBN:https://doi.org/10.48550/arXiv.2502.05206 / Published by ArXiv / on (web) Publishing site
- 3 Large Language Model Safety
- The Odyssey of the Fittest: Can Agents Survive and Still Be Good? / 2502.05442 / ISBN:https://doi.org/10.48550/arXiv.2502.05442 / Published by ArXiv / on (web) Publishing site
- Method
- A Conceptual Exploration of Generative AI-Induced Cognitive Dissonance and its Emergence in University-Level Academic Writing / 2502.05698 / ISBN:https://doi.org/10.48550/arXiv.2502.05698 / Published by ArXiv / on (web) Publishing site
- Introduction
- Position: We Need An Adaptive Interpretation of Helpful, Honest, and Harmless Principles / 2502.06059 / ISBN:https://doi.org/10.48550/arXiv.2502.06059 / Published by ArXiv / on (web) Publishing site
- 3 Ambiguity and Conflicts in HHH
- DailyDilemmas: Revealing Value Preferences of LLMs with Quandaries of Daily Life / 2410.02683 / ISBN:https://doi.org/10.48550/arXiv.2410.02683 / Published by ArXiv / on (web) Publishing site
- Appendices
- Relational Norms for Human-AI Cooperation / 2502.12102 / ISBN:https://doi.org/10.48550/arXiv.2502.12102 / Published by ArXiv / on (web) Publishing site
- Introduction
- Multi-Agent Risks from Advanced AI / 2502.14143 / ISBN:https://doi.org/10.48550/arXiv.2502.14143 / Published by ArXiv / on (web) Publishing site
- Executive Summary
2 Failure Modes
3 Risk Factors
4 Implications
Appendices - On the Trustworthiness of Generative Foundation Models: Guideline, Assessment, and Perspective / 2502.14296 / ISBN:https://doi.org/10.48550/arXiv.2502.14296 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Background
3 Guidelines of Trustworthy Generative Foundation Models
4 Designing TrustGen, a Dynamic Benchmark Platform for Evaluating the Trustworthiness of GenFMs
5 Benchmarking Text-to-Image Models
8 Other Generative Models
9 Trustworthiness in Downstream Applications
10 Further Discussion - Surgical Scene Understanding in the Era of Foundation AI Models: A Comprehensive Review / 2502.14886 / ISBN:https://doi.org/10.48550/arXiv.2502.14886 / Published by ArXiv / on (web) Publishing site
- II. Background and Challenges
- Comprehensive Analysis of Transparency and Accessibility of ChatGPT, DeepSeek, And other SoTA Large Language Models / 2502.18505 / ISBN:https://doi.org/10.48550/arXiv.2502.18505 / Published by ArXiv / on (web) Publishing site
- 1. Introduction
2. Methodology
3. Results
4. Discussion - Developmental Support Approach to AI's Autonomous Growth: Toward the Realization of a Mutually Beneficial Stage Through Experiential Learning / 2502.19798 / ISBN:https://doi.org/10.48550/arXiv.2502.19798 / Published by ArXiv / on (web) Publishing site
- Experiments and Results
- An LLM-based Delphi Study to Predict GenAI Evolution / 2502.21092 / ISBN:https://doi.org/10.48550/arXiv.2502.21092 / Published by ArXiv / on (web) Publishing site
- 2 Methods
- Evaluating Large Language Models on the Spanish Medical Intern Resident (MIR) Examination 2024/2025: A Comparative Analysis of Clinical Reasoning and Knowledge Application / 2503.00025 / ISBN:https://doi.org/10.48550/arXiv.2503.00025 / Published by ArXiv / on (web) Publishing site
- 2. Theoretical Framework
- Can AI Model the Complexities of Human Moral Decision-Making? A Qualitative Study of Kidney Allocation Decisions / 2503.00940 / ISBN:https://doi.org/10.48550/arXiv.2503.00940 / Published by ArXiv / on (web) Publishing site
- 6 Conclusion
- Digital Dybbuks and Virtual Golems: AI, Memory, and the Ethics of Holocaust Testimony / 2503.01369 / ISBN:https://doi.org/10.48550/arXiv.2503.01369 / Published by ArXiv / on (web) Publishing site
- Permissibility of digital duplicates
- Jailbreaking Generative AI: Empowering Novices to Conduct Phishing Attacks / 2503.01395 / ISBN:https://doi.org/10.48550/arXiv.2503.01395 / Published by ArXiv / on (web) Publishing site
- II. Methodology for Launching the Phishing Attack
- Medical Hallucinations in Foundation Models and Their Impact on Healthcare / 2503.05777 / ISBN:https://doi.org/10.48550/arXiv.2503.05777 / Published by ArXiv / on (web) Publishing site
- 2 LLM Hallucinations in Medicine
5 Mitigation Strategies
6 Experiments on Medical Hallucination Benchmark
Appendices - Mapping out AI Functions in Intelligent Disaster (Mis)Management and AI-Caused Disasters / 2502.16644 / ISBN:https://doi.org/10.48550/arXiv.2502.16644 / Published by ArXiv / on (web) Publishing site
- 5. Conclusion and Future Directions
- AI Governance InternationaL Evaluation Index (AGILE Index) / 2502.15859 / ISBN:https://doi.org/10.48550/arXiv.2502.15859 / Published by ArXiv / on (web) Publishing site
- 3. Analysis and Observations
- MinorBench: A hand-built benchmark for content-based risks for children / 2503.10242 / ISBN:https://doi.org/10.48550/arXiv.2503.10242 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Literature Review
5 Methodology
6 Results - DarkBench: Benchmarking Dark Patterns in Large Language Models / 2503.10728 / ISBN:https://doi.org/10.48550/arXiv.2503.10728 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Methodology
References
Appendices - Policy Frameworks for Transparent Chain-of-Thought Reasoning in Large Language Models / 2503.14521 / ISBN:https://doi.org/10.48550/arXiv.2503.14521 / Published by ArXiv / on (web) Publishing site
- 2 Transparency of CoT in Current LLMs
3 Arguments pro Transparent CoT
4 Arguments against Transparent CoT - Ethical Implications of AI in Data Collection: Balancing Innovation with Privacy / 2503.14539 / ISBN:https://doi.org/10.48550/arXiv.2503.14539 / Published by ArXiv / on (web) Publishing site
- Introduction
- The Role of Legal Frameworks in Shaping Ethical Artificial Intelligence Use in Corporate Governance / 2503.14540 / ISBN:https://doi.org/10.48550/arXiv.2503.14540 / Published by ArXiv / on (web) Publishing site
- II. Legal Frameworks and Corporate AI Governance: Current Landscape and Emerging Approaches
- A Peek Behind the Curtain: Using Step-Around Prompt Engineering to Identify Bias and Misinformation in GenAI Models / 2503.15205 / ISBN:https://doi.org/10.48550/arXiv.2503.15205 / Published by ArXiv / on (web) Publishing site
- Introduction
Literature Review
Step-Around Prompting: A Research Tool and Potential Threat
Ethics of Step-Around Prompting - HH4AI: A methodological Framework for AI Human Rights impact assessment under the EUAI ACT / 2503.18994 / ISBN:https://doi.org/10.48550/arXiv.2503.18994 / Published by ArXiv / on (web) Publishing site
- 3 Standards and Guidelines
- BEATS: Bias Evaluation and Assessment Test Suite for Large Language Models / 2503.24310 / ISBN:https://doi.org/10.48550/arXiv.2503.24310 / Published by ArXiv / on (web) Publishing site
- 2 Proposed Framework - BEATS
7 Appendix - Towards Adaptive AI Governance: Comparative Insights from the U.S., EU, and Asia / 2504.00652 / ISBN:https://doi.org/10.48550/arXiv.2504.00652 / Published by ArXiv / on (web) Publishing site
- IV. Comparative Analysis
- Who is Responsible When AI Fails? Mapping Causes, Entities, and Consequences of AI Privacy and Ethical Incidents / 2504.01029 / ISBN:https://doi.org/10.48550/arXiv.2504.01029 / Published by ArXiv / on (web) Publishing site
- Appendices
- Who Owns the Output? Bridging Law and Technology in LLMs Attribution / 2504.01032 / ISBN:https://doi.org/10.48550/arXiv.2504.01032 / Published by ArXiv / on (web) Publishing site
- 2 Legal Background
3 From Legal Frameworks to Technological Solutions
4 Legal cases as Use Cases for Attribution Methods - Language-Dependent Political Bias in AI: A Study of ChatGPT and Gemini / 2504.06436 / ISBN:https://doi.org/10.48550/arXiv.2504.06436 / Published by ArXiv / on (web) Publishing site
- 1. Introduction
2. Artificial Intelligence
5. Conclusion - We Are All Creators: Generative AI, Collective Knowledge, and the Path Towards Human-AI Synergy / 2504.07936 / ISBN:https://doi.org/10.48550/arXiv.2504.07936 / Published by ArXiv / on (web) Publishing site
- 4 Navigating the Copyright Labyrinth: Collective Input, Individual Output?
- ValueCompass: A Framework for Measuring Contextual Value Alignment Between Human and LLMs / 2409.09586 / ISBN:https://doi.org/10.48550/arXiv.2409.09586 / Published by ArXiv / on (web) Publishing site
- Appendices
- Towards interactive evaluations for interaction harms in human-AI systems / 2405.10632 / ISBN:https://doi.org/10.48550/arXiv.2405.10632 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
- A Comprehensive Survey on Integrating Large Language Models with Knowledge-Based Methods / 2501.13947 / ISBN:https://doi.org/10.48550/arXiv.2501.13947 / Published by ArXiv / on (web) Publishing site
- 2. Overview of LLMs
3. Challenges in implementing LLMs for real-world scenarios
5. Integrating LLMs with knowledge bases - Transforming Science with Large Language Models: A Survey on AI-assisted Scientific Discovery, Experimentation, Content Generation, and Evaluation / 2502.05151 / ISBN:https://doi.org/10.48550/arXiv.2502.05151 / Published by ArXiv / on (web) Publishing site
- 3 AI Support for Individual Topics and Tasks
- Who is Responsible? The Data, Models, Users or Regulations? A Comprehensive Survey on Responsible Generative AI for a Sustainable Future / 2502.08650 / ISBN:https://doi.org/10.48550/arXiv.2502.08650 / Published by ArXiv / on (web) Publishing site
- 4 Best Practices for Responsible Generative AI and Existing Frameworks
- Framework, Standards, Applications and Best practices of Responsible AI : A Comprehensive Survey / 2504.13979 / ISBN:https://doi.org/10.48550/arXiv.2504.13979 / Published by ArXiv / on (web) Publishing site
- 1. Introduction
- Towards responsible AI for education: Hybrid human-AI to confront the Elephant in the room / 2504.16148 / ISBN:https://doi.org/10.48550/arXiv.2504.16148 / Published by ArXiv / on (web) Publishing site
- 3. Challenges of current AI methods in education: The Elephant in the room
4. Hybrid human-AI methods for responsible AI for education
5. Conclusion - Auditing the Ethical Logic of Generative AI Models / 2504.17544 / ISBN:https://doi.org/10.48550/arXiv.2504.17544 / Published by ArXiv / on (web) Publishing site
- Auditing the Ethical Logic of Generative AI
Seven Contemporary LLMs
Reasoning and Chain-of-Thought AI Models
Auditing the Reasoning Models - AI Ethics and Social Norms: Exploring ChatGPT's Capabilities From What to How / 2504.18044 / ISBN:https://doi.org/10.48550/arXiv.2504.18044 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Background
3 Methodology
4 Result
5 Discussion
Appendix - The Convergent Ethics of AI? Analyzing Moral Foundation Priorities in Large Language Models with a Multi-Framework Approach / 2504.19255 / ISBN:https://doi.org/10.48550/arXiv.2504.19255 / Published by ArXiv / on (web) Publishing site
- Abstract
Methodology - AI Awareness / 2504.20084 / ISBN:https://doi.org/10.48550/arXiv.2504.20084 / Published by ArXiv / on (web) Publishing site
- 3 Evaluating AI Awareness in LLMs
4 AI Awareness and AI Capabilities
5 Risks and Challenges of AI Awareness - Generative AI in Financial Institution: A Global Survey of Opportunities, Threats, and Regulation / 2504.21574 / ISBN:https://doi.org/10.48550/arXiv.2504.21574 / Published by ArXiv / on (web) Publishing site
- 7. Recommendations for Secure AI Adoption
- Emotions in the Loop: A Survey of Affective Computing for Emotional Support / 2505.01542 / ISBN:https://doi.org/10.48550/arXiv.2505.01542 / Published by ArXiv / on (web) Publishing site
- IV. Applications and Approaches
- Towards Safer Pretraining: Analyzing and Filtering Harmful Content in Webscale datasets for Responsible LLMs / 2505.02009 / ISBN:https://doi.org/10.48550/arXiv.2505.02009 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
6 Results - GenAI in Entrepreneurship: a systematic review of generative artificial intelligence in entrepreneurship research: current issues and future directions / 2505.05523 / ISBN:https://doi.org/10.48550/arXiv.2505.05523 / Published by ArXiv / on (web) Publishing site
- 2. Background
- AI and Generative AI Transforming Disaster Management: A Survey of Damage Assessment and Response Techniques / 2505.08202 / ISBN:https://doi.org/10.48550/arXiv.2505.08202 / Published by ArXiv / on (web) Publishing site
- II Domain-Specific Literature Review
- Ethics and Persuasion in Reinforcement Learning from Human Feedback: A Procedural Rhetorical Approach / 2505.09576 / ISBN:https://doi.org/10.48550/arXiv.2505.09576 / Published by ArXiv / on (web) Publishing site
- I Introduction
II Background - WorldView-Bench: A Benchmark for Evaluating Global Cultural Perspectives in Large Language Models / 2505.09595 / ISBN:https://doi.org/10.48550/arXiv.2505.09595 / Published by ArXiv / on (web) Publishing site
- 3 Methodology and System Design
4 Benchmarking and Intervention Strategies
5 Results - Analysing Safety Risks in LLMs Fine-Tuned with Pseudo-Malicious Cyber Security Data / 2505.09974 / ISBN:https://doi.org/10.48550/arXiv.2505.09974 / Published by ArXiv / on (web) Publishing site
- 3 Methodology
- AI LEGO: Scaffolding Cross-Functional Collaboration in Industrial Responsible AI Practices during Early Design Stages / 2505.10300 / ISBN:https://doi.org/10.48550/arXiv.2505.10300 / Published by ArXiv / on (web) Publishing site
- 4 AI LEGO
- From Automation to Autonomy: A Survey on Large Language Models in Scientific Discovery / 2505.13259 / ISBN:https://doi.org/10.48550/arXiv.2505.13259 / Published by ArXiv / on (web) Publishing site
- 3 Level 1. LLM as Tool (Table A1)
- AI vs. Human Judgment of Content Moderation: LLM-as-a-Judge and Ethics-Based Response Refusals / 2505.15365 / ISBN:https://doi.org/10.48550/arXiv.2505.15365 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Theoretical Background
3 Methodology - Exploring Moral Exercises for Human Oversight of AI systems: Insights from Three Pilot Studies / 2505.15851 / ISBN:https://doi.org/10.48550/arXiv.2505.15851 / Published by ArXiv / on (web) Publishing site
- Appendix
- Advancing the Scientific Method with Large Language Models: From Hypothesis to Discovery / 2505.16477 / ISBN:https://doi.org/10.48550/arXiv.2505.16477 / Published by ArXiv / on (web) Publishing site
- Current use of LLMs – From Specialised Scientific Copilots to LLM-assisted Scientific Discoveries
- Cultural Value Alignment in Large Language Models: A Prompt-based Analysis of Schwartz Values in Gemini, ChatGPT, and DeepSeek / 2505.17112 / ISBN:https://doi.org/10.48550/arXiv.2505.17112 / Published by ArXiv / on (web) Publishing site
- Methods
- SweEval: Do LLMs Really Swear? A Safety Benchmark for Testing Limits for Enterprise Use / 2505.17332 / ISBN:https://doi.org/10.48550/arXiv.2505.17332 / Published by ArXiv / on (web) Publishing site
- 2 Related Work
- Simulating Ethics: Using LLM Debate Panels to Model Deliberation on Medical Dilemmas / 2505.21112 / ISBN:https://doi.org/10.48550/arXiv.2505.21112 / Published by ArXiv / on (web) Publishing site
- 3 Methodology
5. Discussion - Are Language Models Consequentialist or Deontological Moral Reasoners? / 2505.21479 / ISBN:https://doi.org/10.48550/arXiv.2505.21479 / Published by ArXiv / on (web) Publishing site
- 4 Methodology
Appendix - Toward Effective AI Governance: A Review of Principles / 2505.23417 / ISBN:https://doi.org/10.48550/arXiv.2505.23417 / Published by ArXiv / on (web) Publishing site
- VI Conclusion
- SafeScientist: Toward Risk-Aware Scientific Discoveries by LLM Agents / 2505.23559 / ISBN:https://doi.org/10.48550/arXiv.2505.23559 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
4 SciSafetyBench - Exploring Societal Concerns and Perceptions of AI: A Thematic Analysis through the Lens of Problem-Seeking / 2505.23930 / ISBN:https://doi.org/10.48550/arXiv.2505.23930 / Published by ArXiv / on (web) Publishing site
- Results
- Feeling Guilty Being a c(ai)borg: Navigating the Tensions Between Guilt and Empowerment in AI Use / 2506.00094 / ISBN:https://doi.org/10.48550/arXiv.2506.00094 / Published by ArXiv / on (web) Publishing site
- 3. Methodological Approach
- DeepSeek in Healthcare: A Survey of Capabilities, Risks, and Clinical Applications of Open-Source Large Language Models / 2506.01257 / ISBN:https://doi.org/10.48550/arXiv.2506.01257 / Published by ArXiv / on (web) Publishing site
- Introduction
Comparisons with Other Models
Clinical Applications
Strengths and Limitations of DeepSeek-R1
Discussion
Appendix - Machine vs Machine: Using AI to Tackle Generative AI Threats in Assessment / 2506.02046 / ISBN:https://doi.org/10.48550/arXiv.2506.02046 / Published by ArXiv / on (web) Publishing site
- 3. The Eight Elements of Static Analysis: Theoretical Justification
- Mitigating Manipulation and Enhancing Persuasion: A Reflective Multi-Agent Approach for Legal Argument Generation / 2506.02992 / ISBN:https://doi.org/10.48550/arXiv.2506.02992 / Published by ArXiv / on (web) Publishing site
- 4 Experimental Design
- HADA: Human-AI Agent Decision Alignment Architecture / 2506.04253 / ISBN:https://doi.org/10.48550/arXiv.2506.04253 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
- Explainability in Context: A Multilevel Framework Aligning AI Explanations with Stakeholder with LLMs / 2506.05887 / ISBN:https://doi.org/10.48550/arXiv.2506.05887 / Published by ArXiv / on (web) Publishing site
- 4 Case Studies: Applying the Multilevel Framework with LLMs
- Surgeons Awareness, Expectations, and Involvement with Artificial Intelligence: a Survey Pre and Post the GPT Era / 2506.08258 / ISBN:https://doi.org/10.48550/arXiv.2506.08258 / Published by ArXiv / on (web) Publishing site
- 2. Methods