If you need more than one keyword, modify the field and separate the keywords with an underscore (_).
The list of search keywords can be up to 50 characters long.
If you modify the keywords, press Enter within the field to confirm the new search key.
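The rules above (underscore-separated keywords, 50-character limit) can be sketched as a small validator. This is a hypothetical illustration, not part of the search tool itself; the function name `is_valid_search_key` is an assumption.

```python
def is_valid_search_key(key: str) -> bool:
    """Check a search key against the rules stated above:
    multiple keywords are separated by underscores, and the
    whole string may be at most 50 characters long."""
    if not key or len(key) > 50:
        return False
    # every underscore-separated keyword must be non-empty
    return all(part for part in key.split("_"))
```

For example, `is_valid_search_key("ethics_governance")` accepts a two-keyword key, while a 51-character string or a key with an empty keyword (`"ethics__bias"`) is rejected.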
Tag: output
Bibliography items where the tag occurs: 144
- The AI Index 2022 Annual Report / 2205.03468 / ISBN:https://doi.org/10.48550/arXiv.2205.03468 / Published by ArXiv / on (web) Publishing site
- Report highlights
Chapter 3 Technical AI Ethics
Chapter 5 AI Policy and Governance - The Different Faces of AI Ethics Across the World: A Principle-Implementation Gap Analysis / 2206.03225 / ISBN:https://doi.org/10.48550/arXiv.2206.03225 / Published by ArXiv / on (web) Publishing site
- 5 Evaluation of Ethical Principle Implementations
- Worldwide AI Ethics: a review of 200 guidelines and recommendations for AI governance / 2206.11922 / ISBN:https://doi.org/10.48550/arXiv.2206.11922 / Published by ArXiv / on (web) Publishing site
- 5 Discussion
- GPT detectors are biased against non-native English writers / 2304.02819 / ISBN:https://doi.org/10.48550/arXiv.2304.02819 / Published by ArXiv / on (web) Publishing site
- References
- Implementing Responsible AI: Tensions and Trade-Offs Between Ethics Aspects / 2304.08275 / ISBN:https://doi.org/10.48550/arXiv.2304.08275 / Published by ArXiv / on (web) Publishing site
- I. Introduction
II. Underlying Aspects
III. Interactions between Aspects - QB4AIRA: A Question Bank for AI Risk Assessment / 2305.09300 / ISBN:https://doi.org/10.48550/arXiv.2305.09300 / Published by ArXiv / on (web) Publishing site
- 2 The Question Bank: QB4AIRA
- From OECD to India: Exploring cross-cultural differences in perceived trust, responsibility and reliance of AI and human experts / 2307.15452 / ISBN:https://doi.org/10.48550/arXiv.2307.15452 / Published by ArXiv / on (web) Publishing site
- 3. Results
- The Ethics of AI Value Chains: An Approach for Integrating and Expanding AI Ethics Research, Practice, and Governance / 2307.16787 / ISBN:https://doi.org/10.48550/arXiv.2307.16787 / Published by ArXiv / on (web) Publishing site
- 1. Introduction
2. Theory
4. Ethical Implications of AI Value Chains
5. Future Directions for Research, Practice, & Policy - Perceptions of the Fourth Industrial Revolution and Artificial Intelligence Impact on Society / 2308.02030 / ISBN:https://doi.org/10.48550/arXiv.2308.02030 / Published by ArXiv / on (web) Publishing site
- Literature Review
- From Military to Healthcare: Adopting and Expanding Ethical Principles for Generative Artificial Intelligence / 2308.02448 / ISBN:https://doi.org/10.48550/arXiv.2308.02448 / Published by ArXiv / on (web) Publishing site
- What is Generative Artificial Intelligence?
GREAT PLEA Ethical Principles for Generative AI in Healthcare - Ethical Considerations and Policy Implications for Large Language Models: Guiding Responsible Development and Deployment / 2308.02678 / ISBN:https://doi.org/10.48550/arXiv.2308.02678 / Published by ArXiv / on (web) Publishing site
- Abstract
Introduction
System-role
Perturbation
Image-related
Hallucination
Generation-related
Conclusion - Dual Governance: The intersection of centralized regulation and crowdsourced safety mechanisms for Generative AI / 2308.04448 / ISBN:https://doi.org/10.48550/arXiv.2308.04448 / Published by ArXiv / on (web) Publishing site
- 2 Background
5 Crowdsourced safety mechanism - Bad, mad, and cooked: Moral responsibility for civilian harms in human-AI military teams / 2211.06326 / ISBN:https://doi.org/10.48550/arXiv.2211.06326 / Published by ArXiv / on (web) Publishing site
- Introduction
Computers, Autonomy and Accountability - A Survey of Safety and Trustworthiness of Large Language Models through the Lens of Verification and Validation / 2305.11391 / ISBN:https://doi.org/10.48550/arXiv.2305.11391 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Large Language Models
3 Vulnerabilities, Attack, and Limitations
4 General Verification Framework
5 Falsification and Evaluation
6 Verification
7 Runtime Monitor
Reference - Getting pwn'd by AI: Penetration Testing with Large Language Models / 2308.00121 / ISBN:https://doi.org/10.48550/arXiv.2308.00121 / Published by ArXiv / on (web) Publishing site
- 3 LLM-based penetration testing
4 Discussion
5 A vision of AI-augmented pen-testing - Targeted Data Augmentation for bias mitigation / 2308.11386 / ISBN:https://doi.org/10.48550/arXiv.2308.11386 / Published by ArXiv / on (web) Publishing site
- 2 Related works
3 Targeted data augmentation - AIxArtist: A First-Person Tale of Interacting with Artificial Intelligence to Escape Creative Block / 2308.11424 / ISBN:https://doi.org/10.48550/arXiv.2308.11424 / Published by ArXiv / on (web) Publishing site
- Abstract
Case study - Building Trust in Conversational AI: A Comprehensive Review and Solution Architecture for Explainable, Privacy-Aware Systems using LLMs and Knowledge Graph / 2308.13534 / ISBN:https://doi.org/10.48550/arXiv.2308.13534 / Published by ArXiv / on (web) Publishing site
- II. Methods and training process of LLMs
IV. Applied and technology implications for LLMs
V. Market analysis of LLMs and cross-industry use cases
VI. Solution architecture for privacy-aware and trustworthy conversational AI - Rethinking Machine Ethics -- Can LLMs Perform Moral Reasoning through the Lens of Moral Theories? / 2308.15399 / ISBN:https://doi.org/10.48550/arXiv.2308.15399 / Published by ArXiv / on (web) Publishing site
- 3 Theory and method
- The AI Revolution: Opportunities and Challenges for the Finance Sector / 2308.16538 / ISBN:https://doi.org/10.48550/arXiv.2308.16538 / Published by ArXiv / on (web) Publishing site
- Table of contents and index
2 Key AI technology in financial services
4 Threats & potential pitfalls - Ethical Framework for Harnessing the Power of AI in Healthcare and Beyond / 2309.00064 / ISBN:https://doi.org/10.48550/arXiv.2309.00064 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Black box and lack of transparency
3 Bias and fairness - The Impact of Artificial Intelligence on the Evolution of Digital Education: A Comparative Study of OpenAI Text Generation Tools including ChatGPT, Bing Chat, Bard, and Ernie / 2309.02029 / ISBN:https://doi.org/10.48550/arXiv.2309.02029 / Published by ArXiv / on (web) Publishing site
- 1. Introduction
2. Related work
5. Discussion - Pathway to Future Symbiotic Creativity / 2209.02388 / ISBN:https://doi.org/10.48550/arXiv.2209.02388 / Published by ArXiv / on (web) Publishing site
- Introduction
Part 1 - 1 Generative Systems: Mimicking Artifacts
Part 1 - 2 Appreciative Systems: Mimicking Styles
Part 1 - 3 Artistic Systems: Mimicking Inspiration
Part 2 - 1 Biometric Signal Sensing Technologies and Emotion Data
Part 2 - 4 Aesthetic Descriptor: Labelling Artefacts with Emotion
Part 3 - 1 Challenges in Endowing Machines with Creative Abilities
Part 3 - 2 Machine Artist Models
Part 3 - 3 Comparison with Generative Models
Part 3 - 4 Demonstration of the Proposed Framework
Part 5 - 1 Authorship and Ownership of AI-generated Works of Art
Part 5 - 2 Algorithmic Bias in Art Generation
Part 5 - 3 Democratization of Art with new Technologies - FUTURE-AI: Guiding Principles and Consensus Recommendations for Trustworthy Artificial Intelligence in Medical Imaging / 2109.09658 / ISBN:https://doi.org/10.48550/arXiv.2109.09658 / Published by ArXiv / on (web) Publishing site
- 2. Fairness - For Equitable AI in Medical Imaging
4. Traceability - For Transparent and Dynamic AI in Medical Imaging
5. Usability - For Effective and Beneficial AI in Medical Imaging
7. Explainability - For Enhanced Understanding of AI in Medical Imaging
8. FUTURE-AI Quality Check - The Cambridge Law Corpus: A Corpus for Legal AI Research / 2309.12269 / ISBN:https://doi.org/10.48550/arXiv.2309.12269 / Published by ArXiv / on (web) Publishing site
- 4 Experiments
F Evaluation of GPT Models
Cambridge Law Corpus: Datasheet - EALM: Introducing Multidimensional Ethical Alignment in Conversational Information Retrieval / 2310.00970 / ISBN:https://doi.org/10.48550/arXiv.2310.00970 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
4 Modeling Ethics
5 Experiments
6 Conclusions
Appendix - Security Considerations in AI-Robotics: A Survey of Current Methods, Challenges, and Opportunities / 2310.08565 / ISBN:https://doi.org/10.48550/arXiv.2310.08565 / Published by ArXiv / on (web) Publishing site
- II. AI-Robotics Systems Architecture
IV. Attack Surfaces - Deepfakes, Phrenology, Surveillance, and More! A Taxonomy of AI Privacy Risks / 2310.07879 / ISBN:https://doi.org/10.48550/arXiv.2310.07879 / Published by ArXiv / on (web) Publishing site
- 2 Background and Related Work
4 Taxonomy of AI Privacy Risks - ClausewitzGPT Framework: A New Frontier in Theoretical Large Language Model Enhanced Information Operations / 2310.07099 / ISBN:https://doi.org/10.48550/arXiv.2310.07099 / Published by ArXiv / on (web) Publishing site
- Theoretical Impact of LLMs on Information Operations
ClausewitzGPT and Modern Strategy
Ethical and Strategic Considerations: AI Mediators in the Age of LLMs
Integrating Computational Social Science, Computational Ethics, Systems Engineering, and AI Ethics in LLM-driven Operations
Looking Forward: ClausewitzGPT - The AI Incident Database as an Educational Tool to Raise Awareness of AI Harms: A Classroom Exploration of Efficacy, Limitations, & Future Improvements / 2310.06269 / ISBN:https://doi.org/10.48550/arXiv.2310.06269 / Published by ArXiv / on (web) Publishing site
- 3 Analysis and Findings
- A Review of the Ethics of Artificial Intelligence and its Applications in the United States / 2310.05751 / ISBN:https://doi.org/10.48550/arXiv.2310.05751 / Published by ArXiv / on (web) Publishing site
- 1. Introduction
- A Survey of Large Language Models for Healthcare: from Data, Technology, and Applications to Accountability and Ethics / 2310.05694 / ISBN:https://doi.org/10.48550/arXiv.2310.05694 / Published by ArXiv / on (web) Publishing site
- II. WHAT LLMS CAN DO FOR HEALTHCARE? FROM FUNDAMENTAL TASKS TO ADVANCED APPLICATIONS
III. FROM PLMS TO LLMS FOR HEALTHCARE
IV. TRAIN AND USE LLM FOR HEALTHCARE
V. EVALUATION METHOD
VI. IMPROVING FAIRNESS, ACCOUNTABILITY, TRANSPARENCY, AND ETHICS
VII. FUTURE WORK AND CONCLUSION - STREAM: Social data and knowledge collective intelligence platform for TRaining Ethical AI Models / 2310.05563 / ISBN:https://doi.org/10.48550/arXiv.2310.05563 / Published by ArXiv / on (web) Publishing site
- 3 The applications of STREAM
- Ethics of Artificial Intelligence and Robotics in the Architecture, Engineering, and Construction Industry / 2310.05414 / ISBN:https://doi.org/10.48550/arXiv.2310.05414 / Published by ArXiv / on (web) Publishing site
- 4. Systematic Review and Scientometric Analysis
5. Ethical Issues of AI and Robotics in AEC Industry
References - Commercialized Generative AI: A Critical Study of the Feasibility and Ethics of Generating Native Advertising Using Large Language Models in Conversational Web Search / 2310.04892 / ISBN:https://doi.org/10.48550/arXiv.2310.04892 / Published by ArXiv / on (web) Publishing site
- Ethics of Generating Native Ads
Conclusion - Autonomous Vehicles an overview on system, cyber security, risks, issues, and a way forward / 2309.14213 / ISBN:https://doi.org/10.48550/arXiv.2309.14213 / Published by ArXiv / on (web) Publishing site
- 2. Autonomous vehicles
6. Risk management - The Return on Investment in AI Ethics: A Holistic Framework / 2309.13057 / ISBN:https://doi.org/10.48550/arXiv.2309.13057 / Published by ArXiv / on (web) Publishing site
- 4. A Holistic Framework
- An Evaluation of GPT-4 on the ETHICS Dataset / 2309.10492 / ISBN:https://doi.org/10.48550/arXiv.2309.10492 / Published by ArXiv / on (web) Publishing site
- 2 Datasets and Methods
- Who to Trust, How and Why: Untangling AI Ethics Principles, Trustworthiness and Trust / 2309.10318 / ISBN:https://doi.org/10.48550/arXiv.2309.10318 / Published by ArXiv / on (web) Publishing site
- Trust and AI Ethics Principles
Trust in AI as Socio-Technical Systems - Toward an Ethics of AI Belief / 2304.14577 / ISBN:https://doi.org/10.48550/arXiv.2304.14577 / Published by ArXiv / on (web) Publishing site
- 4. Proposed Novel Topics in an Ethics of AI Belief
- Ensuring Trustworthy Medical Artificial Intelligence through Ethical and Philosophical Principles / 2304.11530 / ISBN:https://doi.org/10.48550/arXiv.2304.11530 / Published by ArXiv / on (web) Publishing site
- Towards solving key ethical challenges in Medical AI
- Responsible AI Pattern Catalogue: A Collection of Best Practices for AI Governance and Engineering / 2209.04963 / ISBN:https://doi.org/10.48550/arXiv.2209.04963 / Published by ArXiv / on (web) Publishing site
- 4 Process Patterns
5 Product Patterns - FUTURE-AI: International consensus guideline for trustworthy and deployable artificial intelligence in healthcare / 2309.12325 / ISBN:https://doi.org/10.48550/arXiv.2309.12325 / Published by ArXiv / on (web) Publishing site
- 3 FUTURE-AI Guideline
Appendix A Tables - Language Agents for Detecting Implicit Stereotypes in Text-to-Image Models at Scale / 2310.11778 / ISBN:https://doi.org/10.48550/arXiv.2310.11778 / Published by ArXiv / on (web) Publishing site
- Appendix B Experiment Details
- Specific versus General Principles for Constitutional AI / 2310.13798 / ISBN:https://doi.org/10.48550/arXiv.2310.13798 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 AI feedback on specific problematic AI traits
3 Generalization from a Simple Good for Humanity Principle
6 Discussion - Systematic AI Approach for AGI: Addressing Alignment, Energy, and AGI Grand Challenges / 2310.15274 / ISBN:https://doi.org/10.48550/arXiv.2310.15274 / Published by ArXiv / on (web) Publishing site
- 2 Trifecta of AI Challenges
- AI Alignment and Social Choice: Fundamental Limitations and Policy Implications / 2310.16048 / ISBN:https://doi.org/10.48550/arXiv.2310.16048 / Published by ArXiv / on (web) Publishing site
- Abstract
3 Arrow-Sen Impossibility Theorems for RLHF
5 Conclusion - A Comprehensive Review of AI-enabled Unmanned Aerial Vehicle: Trends, Vision, and Challenges / 2310.16360 / ISBN:https://doi.org/10.48550/arXiv.2310.16360 / Published by ArXiv / on (web) Publishing site
- IV. Artificial Intelligence Embedded UAV
- Unpacking the Ethical Value Alignment in Big Models / 2310.17551 / ISBN:https://doi.org/10.48550/arXiv.2310.17551 / Published by ArXiv / on (web) Publishing site
- 2 Risks and Ethical Issues of Big Model
3 Investigating the Ethical Values of Large Language Models - AI for Open Science: A Multi-Agent Perspective for Ethically Translating Data to Knowledge / 2310.18852 / ISBN:https://doi.org/10.48550/arXiv.2310.18852 / Published by ArXiv / on (web) Publishing site
- 5 Why Openness in AI for Science
- Artificial Intelligence Ethics Education in Cybersecurity: Challenges and Opportunities: a focus group report / 2311.00903 / ISBN:https://doi.org/10.48550/arXiv.2311.00903 / Published by ArXiv / on (web) Publishing site
- AI tool-specific educational concerns
- Human Participants in AI Research: Ethics and Transparency in Practice / 2311.01254 / ISBN:https://doi.org/10.48550/arXiv.2311.01254 / Published by ArXiv / on (web) Publishing site
- 2 Contextual Concerns: Why AI Research Needs its Own Guidelines
A Evaluating Current Practices for Human-Participants Research - LLMs grasp morality in concept / 2311.02294 / ISBN:https://doi.org/10.48550/arXiv.2311.02294 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
3 The Meaning Model - Unlocking the Potential of ChatGPT A Comprehensive Exploration of its Applications, Advantages, Limitations, and Future Directions in Natural Language Processing / 2304.02017 / ISBN:https://doi.org/10.48550/arXiv.2304.02017 / Published by ArXiv / on (web) Publishing site
- 2 Overview of ChatGPT and its capabilities
3 Transformers and pre-trained language models
4 Applications of ChatGPT in real-world scenarios
6 Limitations and potential challenges
7 Ethical considerations when using ChatGPT
8 Prompt engineering and generation
11 Conclusion - Fairness And Bias in Artificial Intelligence: A Brief Survey of Sources, Impacts, And Mitigation Strategies / 2304.07683 / ISBN:https://doi.org/10.48550/arXiv.2304.07683 / Published by ArXiv / on (web) Publishing site
- II. Sources of bias in AI
IV. Mitigation strategies for bias in AI
V. Fairness in AI
VI. Mitigation strategies for fairness in AI - Towards ethical multimodal systems / 2304.13765 / ISBN:https://doi.org/10.48550/arXiv.2304.13765 / Published by ArXiv / on (web) Publishing site
- 2 Related Work
4 A Multimodal Ethics Classifier - A Brief History of Prompt: Leveraging Language Models. (Through Advanced Prompting) / 2310.04438 / ISBN:https://doi.org/10.48550/arXiv.2310.04438 / Published by ArXiv / on (web) Publishing site
- I. Pre-introduction
II. Introduction
IV. History of NLP between 2010 and 2015: the pre-attention mechanism era
VI. 2015: birth of the transformer
IX. 2019: THE YEAR OF CONTROL - Synergizing Human-AI Agency: A Guide of 23 Heuristics for Service Co-Creation with LLM-Based Agents / 2310.15065 / ISBN:https://doi.org/10.48550/arXiv.2310.15065 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Related work
3 Method
4 Findings
5 Discussion
6 Conclusion - She had Cobalt Blue Eyes: Prompt Testing to Create Aligned and Sustainable Language Models / 2310.18333 / ISBN:https://doi.org/10.48550/arXiv.2310.18333 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Related Works
3 ReFLeCT: Robust, Fair, and Safe LLM Construction Test Suite
4 Empirical Evaluation and Outcomes
5 Conclusion - Safety, Trust, and Ethics Considerations for Human-AI Teaming in Aerospace Control / 2311.08943 / ISBN:https://doi.org/10.48550/arXiv.2311.08943 / Published by ArXiv / on (web) Publishing site
- I. Introduction
III. Safety
References - How Trustworthy are Open-Source LLMs? An Assessment under Malicious Demonstrations Shows their Vulnerabilities / 2311.09447 / ISBN:https://doi.org/10.48550/arXiv.2311.09447 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
3 Methodology
5 Conclusion
Ethical Considerations - Prudent Silence or Foolish Babble? Examining Large Language Models' Responses to the Unknown / 2311.09731 / ISBN:https://doi.org/10.48550/arXiv.2311.09731 / Published by ArXiv / on (web) Publishing site
- 3 Experiments
B Confidence Elicitation Method Comparison
D Additional Results and Figures
E Prompt Templates - Revolutionizing Customer Interactions: Insights and Challenges in Deploying ChatGPT and Generative Chatbots for FAQs / 2311.09976 / ISBN:https://doi.org/10.48550/arXiv.2311.09976 / Published by ArXiv / on (web) Publishing site
- 1. Introduction
2. Chatbots Background and Scope of Research
3. Chatbot approaches overview: Taxonomy of existing methods
4. ChatGPT
5. Applications
7. Future Research Directions
References - Case Repositories: Towards Case-Based Reasoning for AI Alignment / 2311.10934 / ISBN:https://doi.org/10.48550/arXiv.2311.10934 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Proposed Process - Responsible AI Considerations in Text Summarization Research: A Review of Current Practices / 2311.11103 / ISBN:https://doi.org/10.48550/arXiv.2311.11103 / Published by ArXiv / on (web) Publishing site
- 4 Findings
B Methodology - Assessing AI Impact Assessments: A Classroom Study / 2311.11193 / ISBN:https://doi.org/10.48550/arXiv.2311.11193 / Published by ArXiv / on (web) Publishing site
- A Overview of AIIA Instruments
- GPT in Data Science: A Practical Exploration of Model Selection / 2311.11516 / ISBN:https://doi.org/10.48550/arXiv.2311.11516 / Published by ArXiv / on (web) Publishing site
- III. Approach: capturing and representing heuristics behind GPT's decision-making process
IV. Comparative results - Large Language Models in Education: Vision and Opportunities / 2311.13160 / ISBN:https://doi.org/10.48550/arXiv.2311.13160 / Published by ArXiv / on (web) Publishing site
- III. Key technologies for EDULLMS
VI. Challenges and future directions - The Rise of Creative Machines: Exploring the Impact of Generative AI / 2311.13262 / ISBN:https://doi.org/10.48550/arXiv.2311.13262 / Published by ArXiv / on (web) Publishing site
- I. Introduction
- Towards Auditing Large Language Models: Improving Text-based Stereotype Detection / 2311.14126 / ISBN:https://doi.org/10.48550/arXiv.2311.14126 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Related Works
3 Methodology - Ethical implications of ChatGPT in higher education: A scoping review / 2311.14378 / ISBN:https://doi.org/10.48550/arXiv.2311.14378 / Published by ArXiv / on (web) Publishing site
- Introduction
Findings - Potential Societal Biases of ChatGPT in Higher Education: A Scoping Review / 2311.14381 / ISBN:https://doi.org/10.48550/arXiv.2311.14381 / Published by ArXiv / on (web) Publishing site
- Introduction
Overview of societal biases in GAI models
Conclusion - RAISE -- Radiology AI Safety, an End-to-end lifecycle approach / 2311.14570 / ISBN:https://doi.org/10.48550/arXiv.2311.14570 / Published by ArXiv / on (web) Publishing site
- Abstract
1. Introduction
2. Pre-Deployment phase
3. Production deployment monitoring phase
4. Post-market surveillance phase
5. Conclusion - Ethics and Responsible AI Deployment / 2311.14705 / ISBN:https://doi.org/10.48550/arXiv.2311.14705 / Published by ArXiv / on (web) Publishing site
- 4. Addressing bias, transparency, and accountability
- Generative AI and US Intellectual Property Law / 2311.16023 / ISBN:https://doi.org/10.48550/arXiv.2311.16023 / Published by ArXiv / on (web) Publishing site
- V. Potential harms and mitigation
- Survey on AI Ethics: A Socio-technical Perspective / 2311.17228 / ISBN:https://doi.org/10.48550/arXiv.2311.17228 / Published by ArXiv / on (web) Publishing site
- 2 Privacy and data protection
3 Transparency and explainability
4 Fairness and equity
5 Responsibility, accountability, and regulations
6 Environmental impact - Navigating Privacy and Copyright Challenges Across the Data Lifecycle of Generative AI / 2311.18252 / ISBN:https://doi.org/10.48550/arXiv.2311.18252 / Published by ArXiv / on (web) Publishing site
- 2 Legal Basis of Privacy and Copyright Concerns over Generative AI
3 Mapping Challenges throughout the Data Lifecycle
4 Lifecycle Approaches - Integrating AI into CCTV Systems: A Comprehensive Evaluation of Smart Video Surveillance in Community Space / 2312.02078 / ISBN:https://doi.org/10.48550/arXiv.2312.02078 / Published by ArXiv / on (web) Publishing site
- III. System features
IV. System setup and configuration - Generative AI in Higher Education: Seeing ChatGPT Through Universities' Policies, Resources, and Guidelines / 2312.05235 / ISBN:https://doi.org/10.48550/arXiv.2312.05235 / Published by ArXiv / on (web) Publishing site
- 3. Literature review
5. Results - Contra generative AI detection in higher education assessments / 2312.05241 / ISBN:https://doi.org/10.48550/arXiv.2312.05241 / Published by ArXiv / on (web) Publishing site
- 1. Introduction
2. The pitfalls in detecting generative AI output
3. Detectors are not useful
4. Teach critical usage of AI
References - Intelligence Primer / 2008.07324 / ISBN:https://doi.org/10.48550/arXiv.2008.07324 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
7 Mathematically modeling intelligence
8 Consciousness
9 Augmenting human intelligence
12 Large language models and Generative AI - Ethical Considerations Towards Protestware / 2306.10019 / ISBN:https://doi.org/10.48550/arXiv.2306.10019 / Published by ArXiv / on (web) Publishing site
- II. Background
- Control Risk for Potential Misuse of Artificial Intelligence in Science / 2312.06632 / ISBN:https://doi.org/10.48550/arXiv.2312.06632 / Published by ArXiv / on (web) Publishing site
- 2 Risks of Misuse for Artificial Intelligence in Science
3 Control the Risks of AI Models in Science
5 Discussion
Appendix A Assessing the Risks of AI Misuse in Scientific Research
Appendix C Detailed Implementation of SciGuard - Disentangling Perceptions of Offensiveness: Cultural and Moral Correlates / 2312.06861 / ISBN:https://doi.org/10.48550/arXiv.2312.06861 / Published by ArXiv / on (web) Publishing site
- General Discussion
Moral Factors - Navigating the generative AI era: Introducing the AI assessment scale for ethical GenAI assessment / 2312.07086 / ISBN:https://doi.org/10.48550/arXiv.2312.07086 / Published by ArXiv / on (web) Publishing site
- Literature
The AI Assessment Scale
Conclusion - Culturally Responsive Artificial Intelligence -- Problems, Challenges and Solutions / 2312.08467 / ISBN:https://doi.org/10.48550/arXiv.2312.08467 / Published by ArXiv / on (web) Publishing site
- Culturally responsive AI – current landscape
Recommendations - Investigating Responsible AI for Scientific Research: An Empirical Study / 2312.09561 / ISBN:https://doi.org/10.48550/arXiv.2312.09561 / Published by ArXiv / on (web) Publishing site
- Appendix A – Survey Questionnaire
Appendix B – Interview Questionnaire - Learning Human-like Representations to Enable Learning Human Values / 2312.14106 / ISBN:https://doi.org/10.48550/arXiv.2312.14106 / Published by ArXiv / on (web) Publishing site
- 1. Introduction
- Culturally-Attuned Moral Machines: Implicit Learning of Human Value Systems by AI through Inverse Reinforcement Learning / 2312.17479 / ISBN:https://doi.org/10.48550/arXiv.2312.17479 / Published by ArXiv / on (web) Publishing site
- Introduction
Methods - Exploring the Frontiers of LLMs in Psychological Applications: A Comprehensive Review / 2401.01519 / ISBN:https://doi.org/10.48550/arXiv.2401.01519 / Published by ArXiv / on (web) Publishing site
- 2. LLMs in cognitive and behavioral psychology
5. LLMs in social and cultural psychology
6. LLMs as research tools in psychology
7. Challenges and future directions - Synthetic Data in AI: Challenges, Applications, and Ethical Implications / 2401.01629 / ISBN:https://doi.org/10.48550/arXiv.2401.01629 / Published by ArXiv / on (web) Publishing site
- 3. The usage of synthetic data
- AI Ethics Principles in Practice: Perspectives of Designers and Developers / 2112.07467 / ISBN:https://doi.org/10.48550/arXiv.2112.07467 / Published by ArXiv / on (web) Publishing site
- II. Related work
III. Methods
IV. Results
V. Discussion and suggestions
VII. Conclusion - Unmasking Bias in AI: A Systematic Review of Bias Detection and Mitigation Strategies in Electronic Health Record-based Models / 2310.19917 / ISBN:https://doi.org/10.48550/arXiv.2310.19917 / Published by ArXiv / on (web) Publishing site
- Discussion
PRISMA 2020 flow diagram and bias handling workflow - Resolving Ethics Trade-offs in Implementing Responsible AI / 2401.08103 / ISBN:https://doi.org/10.48550/arXiv.2401.08103 / Published by ArXiv / on (web) Publishing site
- III. Discussion and Recommendations
- Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making / 2401.08691 / ISBN:https://doi.org/10.48550/arXiv.2401.08691 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
4 Fairness metrics landscape in machine learning
II Mitigating bias - 5 Fairness mitigation
6 FFTree: a flexible tree to mitigate multiple fairness criteria
9 Towards fairness through time - FAIR Enough How Can We Develop and Assess a FAIR-Compliant Dataset for Large Language Models' Training? / 2401.11033 / ISBN:https://doi.org/10.48550/arXiv.2401.11033 / Published by ArXiv / on (web) Publishing site
- 3 Data Management Challenges in Large Language Models
4 Framework for FAIR Data Principles Integration in LLM Development - Enabling Global Image Data Sharing in the Life Sciences / 2401.13023 / ISBN:https://doi.org/10.48550/arXiv.2401.13023 / Published by ArXiv / on (web) Publishing site
- 1. Motivation for White Paper
- Five ethical principles for generative AI in scientific research / 2401.15284 / ISBN:https://doi.org/10.48550/arXiv.2401.15284 / Published by ArXiv / on (web) Publishing site
- Abstract
Introduction
Principle 1: Understand model training and output
Principle 2: Respect privacy, confidentiality, and copyright
Principle 3: Avoid plagiarism and policy violations
Principle 5: Use AI transparently and reproducibly
Concluding remarks - A Scoping Study of Evaluation Practices for Responsible AI Tools: Steps Towards Effectiveness Evaluations / 2401.17486 / ISBN:https://doi.org/10.48550/arXiv.2401.17486 / Published by ArXiv / on (web) Publishing site
- 2 Related work
3 Methods
5 Towards evaluation of RAI tool effectiveness
A List of RAI tools, with their primary publication
B RAI tools listed by target stage of AI development - Detecting Multimedia Generated by Large AI Models: A Survey / 2402.00045 / ISBN:https://doi.org/10.48550/arXiv.2402.00045 / Published by ArXiv / on (web) Publishing site
- 2 Generation
3 Detection - Responsible developments and networking research: a reflection beyond a paper ethical statement / 2402.00442 / ISBN:https://doi.org/10.48550/arXiv.2402.00442 / Published by ArXiv / on (web) Publishing site
- 3 Beyond technical dimensions
- Generative Artificial Intelligence in Higher Education: Evidence from an Analysis of Institutional Policies and Guidelines / 2402.01659 / ISBN:https://doi.org/10.48550/arXiv.2402.01659 / Published by ArXiv / on (web) Publishing site
- 4. Findings
5. Discussion - Trust and ethical considerations in a multi-modal, explainable AI-driven chatbot tutoring system: The case of collaboratively solving Rubik's Cube / 2402.01760 / ISBN:https://doi.org/10.48550/arXiv.2402.01760 / Published by ArXiv / on (web) Publishing site
- 1. Introduction
3. Methodology
D. CausalRating: A Tool To Rate Sentiments Analysis Systems for Bias - Commercial AI, Conflict, and Moral Responsibility: A theoretical analysis and practical approach to the moral responsibilities associated with dual-use AI technology / 2402.01762 / ISBN:https://doi.org/10.48550/arXiv.2402.01762 / Published by ArXiv / on (web) Publishing site
- 4 Recommendations to address threats posed by crossover AI technology
- (A)I Am Not a Lawyer, But...: Engaging Legal Experts towards Responsible LLM Policies for Legal Advice / 2402.01864 / ISBN:https://doi.org/10.48550/arXiv.2402.01864 / Published by ArXiv / on (web) Publishing site
- 3 Methods: case-based expert deliberation
4 Results - POLARIS: A framework to guide the development of Trustworthy AI systems / 2402.05340 / ISBN:https://doi.org/10.48550/arXiv.2402.05340 / Published by ArXiv / on (web) Publishing site
- 4 The POLARIS framework
- Ethics in AI through the Practitioner's View: A Grounded Theory Literature Review / 2206.09514 / ISBN:https://doi.org/10.48550/arXiv.2206.09514 / Published by ArXiv / on (web) Publishing site
- 4 Challenges, Threats and Limitations
- Generative Artificial Intelligence in Healthcare: Ethical Considerations and Assessment Checklist / 2311.02107 / ISBN:https://doi.org/10.48550/arXiv.2311.02107 / Published by ArXiv / on (web) Publishing site
- Results
Discussion
Appendix - How do machines learn? Evaluating the AIcon2abs method / 2401.07386 / ISBN:https://doi.org/10.48550/arXiv.2401.07386 / Published by ArXiv / on (web) Publishing site
- 1. Introduction
4. Results - I Think, Therefore I am: Benchmarking Awareness of Large Language Models Using AwareBench / 2401.17882 / ISBN:https://doi.org/10.48550/arXiv.2401.17882 / Published by ArXiv / on (web) Publishing site
- B Experimental Settings & Results
- Mapping the Ethics of Generative AI: A Comprehensive Scoping Review / 2402.08323 / ISBN:https://doi.org/10.48550/arXiv.2402.08323 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
3 Results - Taking Training Seriously: Human Guidance and Management-Based Regulation of Artificial Intelligence / 2402.08466 / ISBN:https://doi.org/10.48550/arXiv.2402.08466 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Emerging Management-based AI Regulation
4 Techniques of Human-Guided Training
5 Advantages of Human-Guided Training
6 Limitations
- User Modeling and User Profiling: A Comprehensive Survey / 2402.09660 / ISBN:https://doi.org/10.48550/arXiv.2402.09660 / Published by ArXiv / on (web) Publishing site
- 4 Current Taxonomy
- Inadequacies of Large Language Model Benchmarks in the Era of Generative Artificial Intelligence / 2402.09880 / ISBN:https://doi.org/10.48550/arXiv.2402.09880 / Published by ArXiv / on (web) Publishing site
- II. Background and Related Work
IV. Technological Aspects
Appendix A Examples of Benchmark Inadequacies in Technological Aspects
Appendix C Examples of Benchmark Inadequacies in Human Dynamics
- Copyleft for Alleviating AIGC Copyright Dilemma: What-if Analysis, Public Perception and Implications / 2402.12216 / ISBN:https://doi.org/10.48550/arXiv.2402.12216 / Published by ArXiv / on (web) Publishing site
- 2 Related Work
- Evolving AI Collectives to Enhance Human Diversity and Enable Self-Regulation / 2402.12590 / ISBN:https://doi.org/10.48550/arXiv.2402.12590 / Published by ArXiv / on (web) Publishing site
- Abstract
1. Introduction
2. Emergence of Free-Formed AI Collectives
3. Enhanced Performance of Free-Formed AI Collectives
- What if LLMs Have Different World Views: Simulating Alien Civilizations with LLM-based Agents / 2402.13184 / ISBN:https://doi.org/10.48550/arXiv.2402.13184 / Published by ArXiv / on (web) Publishing site
- 3 CosmoAgent Simulation Setting
4 CosmoAgent Architecture
7 Results
- The European Commitment to Human-Centered Technology: The Integral Role of HCI in the EU AI Act's Success / 2402.14728 / ISBN:https://doi.org/10.48550/arXiv.2402.14728 / Published by ArXiv / on (web) Publishing site
- 3 There is no reliable AI regulation without a sound theory of human-AI interaction
- Multi-stakeholder Perspective on Responsible Artificial Intelligence and Acceptability in Education / 2402.15027 / ISBN:https://doi.org/10.48550/arXiv.2402.15027 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
4 Analysis
- Autonomous Vehicles: Evolution of Artificial Intelligence and Learning Algorithms / 2402.17690 / ISBN:https://doi.org/10.48550/arXiv.2402.17690 / Published by ArXiv / on (web) Publishing site
- IV. AI's Role in the Emerging Trend of Internet of Things (IoT) Ecosystem for Autonomous Vehicles
- Envisioning the Applications and Implications of Generative AI for News Media / 2402.18835 / ISBN:https://doi.org/10.48550/arXiv.2402.18835 / Published by ArXiv / on (web) Publishing site
- 2 The Suitability of Generative AI for Newsroom Tasks
- Guidelines for Integrating Value Sensitive Design in Responsible AI Toolkits / 2403.00145 / ISBN:https://doi.org/10.48550/arXiv.2403.00145 / Published by ArXiv / on (web) Publishing site
- 3 Methodology
5 Discussion
- Updating the Minimum Information about CLinical Artificial Intelligence (MI-CLAIM) checklist for generative modeling research / 2403.02558 / ISBN:https://doi.org/10.48550/arXiv.2403.02558 / Published by ArXiv / on (web) Publishing site
- Abstract
Part 1A. Study design for generative modeling
Part 1C. Bias, privacy, and harm assessments
Part 2. A new train-test split for prompt development and few-shot learning
Part 3. Updates to baseline selection
Part 4. Model evaluation
Part 4A. Automated model evaluation
Part 4B. Human model evaluation
Part 5. Interpretability of generative models
Table 1. Updated MI-CLAIM checklist for generative AI clinical studies.
- Towards an AI-Enhanced Cyber Threat Intelligence Processing Pipeline / 2403.03265 / ISBN:https://doi.org/10.48550/arXiv.2403.03265 / Published by ArXiv / on (web) Publishing site
- II. Background & Literature Review
III. The AI-Enhanced CTI Processing Pipeline
IV. Challenges and Considerations
- A Survey on Human-AI Teaming with Large Pre-Trained Models / 2403.04931 / ISBN:https://doi.org/10.48550/arXiv.2403.04931 / Published by ArXiv / on (web) Publishing site
- 2 AI Model Improvements with Human-AI Teaming
3 Effective Human-AI Joint Systems
4 Safe, Secure and Trustworthy AI
- How Trustworthy are Open-Source LLMs? An Assessment under Malicious Demonstrations Shows their Vulnerabilities / 2311.09447 / ISBN:https://doi.org/10.48550/arXiv.2311.09447 / Published by ArXiv / on (web) Publishing site
- B Baseline Setup
D More Results
- AGI Artificial General Intelligence for Education / 2304.12479 / ISBN:https://doi.org/10.48550/arXiv.2304.12479 / Published by ArXiv / on (web) Publishing site
- 3. The Potentials of AGI in Transforming Future Education
- Moral Judgments in Narratives on Reddit: Investigating Moral Sparks via Social Commonsense and Linguistic Signals / 2310.19268 / ISBN:https://doi.org/10.48550/arXiv.2310.19268 / Published by ArXiv / on (web) Publishing site
- 3. Data
- Responsible Artificial Intelligence: A Structured Literature Review / 2403.06910 / ISBN:https://doi.org/10.48550/arXiv.2403.06910 / Published by ArXiv / on (web) Publishing site
- Responsible Artificial Intelligence: A Structured Literature Review / 2403.06910 / ISBN:https://doi.org/10.48550/arXiv.2403.06910 / Published by ArXiv / on (web) Publishing site
- 3. Analysis
- Legally Binding but Unfair? Towards Assessing Fairness of Privacy Policies / 2403.08115 / ISBN:https://doi.org/10.48550/arXiv.2403.08115 / Published by ArXiv / on (web) Publishing site
VI. Ethics and Morality
- Towards a Privacy and Security-Aware Framework for Ethical AI: Guiding the Development and Assessment of AI Systems / 2403.08624 / ISBN:https://doi.org/10.48550/arXiv.2403.08624 / Published by ArXiv / on (web) Publishing site
- Towards a Privacy and Security-Aware Framework for Ethical AI: Guiding the Development and Assessment of AI Systems / 2403.08624 / ISBN:https://doi.org/10.48550/arXiv.2403.08624 / Published by ArXiv / on (web) Publishing site
- 3 Research Methodology
4 Results of the Systematic Literature Review
- Review of Generative AI Methods in Cybersecurity / 2403.08701 / ISBN:https://doi.org/10.48550/arXiv.2403.08701 / Published by ArXiv / on (web) Publishing site
- 2 Attacking GenAI
4 Cyber Defence
5 Implications of Generative AI in Social, Legal, and Ethical Domains
- Evaluation Ethics of LLMs in Legal Domain / 2403.11152 / ISBN:https://doi.org/10.48550/arXiv.2403.11152 / Published by ArXiv / on (web) Publishing site
- 4 Experiment
- Trust in AI: Progress, Challenges, and Future Directions / 2403.14680 / ISBN:https://doi.org/10.48550/arXiv.2403.14680 / Published by ArXiv / on (web) Publishing site
- Trust in AI: Progress, Challenges, and Future Directions / 2403.14680 / ISBN:https://doi.org/10.48550/arXiv.2403.14680 / Published by ArXiv / on (web) Publishing site
- 3. Findings
4. Discussion
5. Concluding Remarks and Future Directions
- Safeguarding Marketing Research: The Generation, Identification, and Mitigation of AI-Fabricated Disinformation / 2403.14706 / ISBN:https://doi.org/10.48550/arXiv.2403.14706 / Published by ArXiv / on (web) Publishing site
- Introduction
Methodology
Data
Results
- The Journey to Trustworthy AI - Part 1: Pursuit of Pragmatic Frameworks / 2403.15457 / ISBN:https://doi.org/10.48550/arXiv.2403.15457 / Published by ArXiv / on (web) Publishing site
- 2 Trustworthy AI: Too Many Definitions or Lack Thereof?
4 AI Regulation: Current Global Landscape
5 Risk
7 Explainable AI as an Enabler of Trustworthy AI
A Appendix
- Analyzing Potential Solutions Involving Regulation to Escape Some of AI's Ethical Concerns / 2403.15507 / ISBN:https://doi.org/10.48550/arXiv.2403.15507 / Published by ArXiv / on (web) Publishing site
- Various AI Ethical Concerns
A Possible Solution to These Concerns With Business Self-Regulation
- The Pursuit of Fairness in Artificial Intelligence Models: A Survey / 2403.17333 / ISBN:https://doi.org/10.48550/arXiv.2403.17333 / Published by ArXiv / on (web) Publishing site
- 3 Conceptualizing Fairness and Bias in ML
5 Ways to mitigate bias and promote Fairness
- Domain-Specific Evaluation Strategies for AI in Journalism / 2403.17911 / ISBN:https://doi.org/10.48550/arXiv.2403.17911 / Published by ArXiv / on (web) Publishing site
- 1 Motivation
3 Blueprints for AI Evaluation in Journalism
- Implications of the AI Act for Non-Discrimination Law and Algorithmic Fairness / 2403.20089 / ISBN:https://doi.org/10.48550/arXiv.2403.20089 / Published by ArXiv / on (web) Publishing site
- 3 Implications of the AI Act
- AI Act and Large Language Models (LLMs): When critical issues and privacy impact require human and ethical oversight / 2404.00600 / ISBN:https://doi.org/10.48550/arXiv.2404.00600 / Published by ArXiv / on (web) Publishing site
- 3. The definition of artificial intelligence systems
5. Human Oversight
6. Large Language Models (LLMs) - Introduction
8. Conclusions
- Exploring the Nexus of Large Language Models and Legal Systems: A Short Survey / 2404.00990 / ISBN:https://doi.org/10.48550/arXiv.2404.00990 / Published by ArXiv / on (web) Publishing site
- 2 Applications of Large Language Models in Legal Tasks
3 Fine-Tuned Large Language Models in Various Countries and Regions
4 Legal Problems of Large Language Models
- A Review of Multi-Modal Large Language and Vision Models / 2404.01322 / ISBN:https://doi.org/10.48550/arXiv.2404.01322 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
4 Specific Large Language Models
5 Vision Models and Multi-Modal Large Language Models
6 Model Tuning
7 Model Evaluation and Benchmarking
- Designing for Human-Agent Alignment: Understanding what humans want from their agents / 2404.04289 / ISBN:https://doi.org/10.1145/3613905.3650948 / Published by ArXiv / on (web) Publishing site
- 5 Discussion
- Frontier AI Ethics: Anticipating and Evaluating the Societal Impacts of Generative Agents / 2404.06750 / ISBN:https://doi.org/10.48550/arXiv.2404.06750 / Published by ArXiv / on (web) Publishing site
- A Primer
Polarised Responses