If you need more than one keyword, edit the field and separate keywords with an underscore (_).
The list of search keywords can be up to 50 characters long.
After modifying the keywords, press Enter within the field to confirm the new search key.
Tag: values
Bibliography items where this tag occurs: 309
- The AI Index 2022 Annual Report / 2205.03468 / ISBN:https://doi.org/10.48550/arXiv.2205.03468 / Published by ArXiv / on (web) Publishing site
- Chapter 1 Research and Development
Chapter 3 Technical AI Ethics
Chapter 5 AI Policy and Governance
Appendix - Exciting, Useful, Worrying, Futuristic: Public Perception of Artificial Intelligence in 8 Countries / 2001.00081 / ISBN:https://doi.org/10.48550/arXiv.2001.00081 / Published by ArXiv / on (web) Publishing site
- 4 Findings
5 Discussion - Ethics of AI: A Systematic Literature Review of Principles and Challenges / 2109.07906 / ISBN:https://doi.org/10.48550/arXiv.2109.07906 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Background
5 Detail results and analysis - The Different Faces of AI Ethics Across the World: A Principle-Implementation Gap Analysis / 2206.03225 / ISBN:https://doi.org/10.48550/arXiv.2206.03225 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
4 Evaluation of Ethical AI Principles
5 Evaluation of Ethical Principle Implementations
6 Gap Mitigation - A Framework for Ethical AI at the United Nations / 2104.12547 / ISBN:https://doi.org/10.48550/arXiv.2104.12547 / Published by ArXiv / on (web) Publishing site
- Executive summary
Introduction
2. Defining ethical AI
Conclusion - Worldwide AI Ethics: a review of 200 guidelines and recommendations for AI governance / 2206.11922 / ISBN:https://doi.org/10.48550/arXiv.2206.11922 / Published by ArXiv / on (web) Publishing site
- 2 Related Work
3 Methodology
4 Results
5 Discussion - Beyond Near- and Long-Term: Towards a Clearer Account of Research Priorities in AI Ethics and Society / 2001.04335 / ISBN:https://doi.org/10.48550/arXiv.2001.04335 / Published by ArXiv / on (web) Publishing site
- 3 The Problem with the Near/Long-Term Distinction
- ESR: Ethics and Society Review of Artificial Intelligence Research / 2106.11521 / ISBN:https://doi.org/10.48550/arXiv.2106.11521 / Published by ArXiv / on (web) Publishing site
- References
- On the Current and Emerging Challenges of Developing Fair and Ethical AI Solutions in Financial Services / 2111.01306 / ISBN:https://doi.org/10.48550/arXiv.2111.01306 / Published by ArXiv / on (web) Publishing site
- 3 Practical Challenges of Ethical AI
- What does it mean to be a responsible AI practitioner: An ontology of roles and skills / 2205.03946 / ISBN:https://doi.org/10.48550/arXiv.2205.03946 / Published by ArXiv / on (web) Publishing site
- 3 Methodology
4 Proposed competency framework for responsible AI practitioners
Appendix A supplementary material - Implementing Responsible AI: Tensions and Trade-Offs Between Ethics Aspects / 2304.08275 / ISBN:https://doi.org/10.48550/arXiv.2304.08275 / Published by ArXiv / on (web) Publishing site
- I. Introduction
- QB4AIRA: A Question Bank for AI Risk Assessment / 2305.09300 / ISBN:https://doi.org/10.48550/arXiv.2305.09300 / Published by ArXiv / on (web) Publishing site
- 2 The Question Bank: QB4AIRA
- A multilevel framework for AI governance / 2307.03198 / ISBN:https://doi.org/10.48550/arXiv.2307.03198 / Published by ArXiv / on (web) Publishing site
- Abstract
1. Introduction
3. International and National Governance
4. Corporate Self-Governance
6. Psychology of Trust
8. Ethics and Trust Lenses in the Multilevel Framework
9. Virtue Ethics - From OECD to India: Exploring cross-cultural differences in perceived trust, responsibility and reliance of AI and human experts / 2307.15452 / ISBN:https://doi.org/10.48550/arXiv.2307.15452 / Published by ArXiv / on (web) Publishing site
- 1. Introduction
3. Results
References - The Ethics of AI Value Chains / 2307.16787 / ISBN:https://doi.org/10.48550/arXiv.2307.16787 / Published by ArXiv / on (web) Publishing site
- 1. Introduction
4. Ethical Implications of AI Value Chains - From Military to Healthcare: Adopting and Expanding Ethical Principles for Generative Artificial Intelligence / 2308.02448 / ISBN:https://doi.org/10.48550/arXiv.2308.02448 / Published by ArXiv / on (web) Publishing site
- What is Generative Artificial Intelligence?
GREAT PLEA Ethical Principles for Generative AI in Healthcare - Dual Governance: The intersection of centralized regulation and crowdsourced safety mechanisms for Generative AI / 2308.04448 / ISBN:https://doi.org/10.48550/arXiv.2308.04448 / Published by ArXiv / on (web) Publishing site
- 2 Background
3 Policy scope
6 The dual governance framework
7 Limitations
8 Conclusion - Normative Ethics Principles for Responsible AI Systems: Taxonomy and Future Directions / 2208.12616 / ISBN:https://doi.org/10.48550/arXiv.2208.12616 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
3 Taxonomy of ethical principles
4 Previous operationalisation of ethical principles
5 Gaps in operationalising ethical principles
6 Conclusion
References - Bad, mad, and cooked: Moral responsibility for civilian harms in human-AI military teams / 2211.06326 / ISBN:https://doi.org/10.48550/arXiv.2211.06326 / Published by ArXiv / on (web) Publishing site
- Introduction
Responsibility in War
Computers, Autonomy and Accountability
AI Workplace Health and Safety Framework - A Survey of Safety and Trustworthiness of Large Language Models through the Lens of Verification and Validation / 2305.11391 / ISBN:https://doi.org/10.48550/arXiv.2305.11391 / Published by ArXiv / on (web) Publishing site
- 2 Large Language Models
5 Falsification and Evaluation
6 Verification
7 Runtime Monitor - Artificial Intelligence across Europe: A Study on Awareness, Attitude and Trust / 2308.09979 / ISBN:https://doi.org/10.48550/arXiv.2308.09979 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Results
3 Discussion
5 Supplementary material
References - Targeted Data Augmentation for bias mitigation / 2308.11386 / ISBN:https://doi.org/10.48550/arXiv.2308.11386 / Published by ArXiv / on (web) Publishing site
- 3 Targeted data augmentation
4 Experiments
5 Conclusions - Exploring the Power of Creative AI Tools and Game-Based Methodologies for Interactive Web-Based Programming / 2308.11649 / ISBN:https://doi.org/10.48550/arXiv.2308.11649 / Published by ArXiv / on (web) Publishing site
- 4 Enhancing User Experience through Creative AI Tools
11 Bias Awareness: Navigating AI-Generated Content in Education
14 Conclusion & Discussion - Collect, Measure, Repeat: Reliability Factors for Responsible AI Data Collection / 2308.12885 / ISBN:https://doi.org/10.48550/arXiv.2308.12885 / Published by ArXiv / on (web) Publishing site
- 3 Reliability and Reproducibility Metrics for Responsible Data Collection
5 Results
B Variability Analysis - The Promise and Peril of Artificial Intelligence -- Violet Teaming Offers a Balanced Path Forward / 2308.14253 / ISBN:https://doi.org/10.48550/arXiv.2308.14253 / Published by ArXiv / on (web) Publishing site
- 4 Integrating red teaming, blue teaming, and ethics with violet teaming
5 Research directions in AI safety and violet teaming
10 Supplemental & additional details
References - Rethinking Machine Ethics -- Can LLMs Perform Moral Reasoning through the Lens of Moral Theories? / 2308.15399 / ISBN:https://doi.org/10.48550/arXiv.2308.15399 / Published by ArXiv / on (web) Publishing site
- 2 Related Works
3 Theory and Method
4 Experiment
5 Conclusion
Limitations
References - The AI Revolution: Opportunities and Challenges for the Finance Sector / 2308.16538 / ISBN:https://doi.org/10.48550/arXiv.2308.16538 / Published by ArXiv / on (web) Publishing site
- 2 Key AI technology in financial services
- Ethical Framework for Harnessing the Power of AI in Healthcare and Beyond / 2309.00064 / ISBN:https://doi.org/10.48550/arXiv.2309.00064 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
3 Bias and fairness
4 Human-centric AI
6 Way forward - The Impact of Artificial Intelligence on the Evolution of Digital Education: A Comparative Study of OpenAI Text Generation Tools including ChatGPT, Bing Chat, Bard, and Ernie / 2309.02029 / ISBN:https://doi.org/10.48550/arXiv.2309.02029 / Published by ArXiv / on (web) Publishing site
- 6. Conclusion
- Pathway to Future Symbiotic Creativity / 2209.02388 / ISBN:https://doi.org/10.48550/arXiv.2209.02388 / Published by ArXiv / on (web) Publishing site
- Introduction
Part 1 - 2 Appreciate Systems: Mimicking Styles
Part 1 - 3 Artistic Systems: Mimicking Inspiration
Part 2 - 3 Photogrammetry / Volumetric Capture
Part 2 - 4 Aesthetic Descriptor: Labelling Artefacts with Emotion
Part 3 Towards a Machine Artist Model
Part 3 - 2 Machine Artist Models - FUTURE-AI: Guiding Principles and Consensus Recommendations for Trustworthy Artificial Intelligence in Medical Imaging / 2109.09658 / ISBN:https://doi.org/10.48550/arXiv.2109.09658 / Published by ArXiv / on (web) Publishing site
- 2. Fairness - For Equitable AI in Medical Imaging
4. Traceability - For Transparent and Dynamic AI in Medical Imaging
9. Discussion and Conclusion - The Cambridge Law Corpus: A Corpus for Legal AI Research / 2309.12269 / ISBN:https://doi.org/10.48550/arXiv.2309.12269 / Published by ArXiv / on (web) Publishing site
- Cambridge Law Corpus: Datasheet
- EALM: Introducing Multidimensional Ethical Alignment in Conversational Information Retrieval / 2310.00970 / ISBN:https://doi.org/10.48550/arXiv.2310.00970 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Related Work
3 Dataset Construction
References - Security Considerations in AI-Robotics: A Survey of Current Methods, Challenges, and Opportunities / 2310.08565 / ISBN:https://doi.org/10.48550/arXiv.2310.08565 / Published by ArXiv / on (web) Publishing site
- IV. Attack Surfaces
V. Ethical & Legal Concerns - If our aim is to build morality into an artificial agent, how might we begin to go about doing so? / 2310.08295 / ISBN:https://doi.org/10.48550/arXiv.2310.08295 / Published by ArXiv / on (web) Publishing site
- Abstract
3 Proposing a Hybrid Approach
4 AI Governance Principles
5 Towards Moral AI - ClausewitzGPT Framework: A New Frontier in Theoretical Large Language Model Enhanced Information Operations / 2310.07099 / ISBN:https://doi.org/10.48550/arXiv.2310.07099 / Published by ArXiv / on (web) Publishing site
- Mathematical Foundations
Ethical and Strategic Considerations: AI Mediators in the Age of LLMs
Integrating Computational Social Science, Computational Ethics, Systems Engineering, and AI Ethics in LLMdriven Operations
Looking Forward: ClausewitzGPT
Conclusion - The AI Incident Database as an Educational Tool to Raise Awareness of AI Harms: A Classroom Exploration of Efficacy, Limitations, & Future Improvements / 2310.06269 / ISBN:https://doi.org/10.48550/arXiv.2310.06269 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Research Design and Methodology
3 Analysis and Findings
4 Discussion
References
B Pre-class Questionnaire (Verbatim)
D Post-Activity Questionnaire (Verbatim)
G Statistical Tests - A Review of the Ethics of Artificial Intelligence and its Applications in the United States / 2310.05751 / ISBN:https://doi.org/10.48550/arXiv.2310.05751 / Published by ArXiv / on (web) Publishing site
- 2. Literature Review
4. Implementing the Practical Use of Ethical AI Applications
5. Conclusions and Recommendations - A Survey of Large Language Models for Healthcare: from Data, Technology, and Applications to Accountability and Ethics / 2310.05694 / ISBN:https://doi.org/10.48550/arXiv.2310.05694 / Published by ArXiv / on (web) Publishing site
- IV. TRAIN AND USE LLM FOR HEALTHCARE
VI. IMPROVING FAIRNESS, ACCOUNTABILITY, TRANSPARENCY, AND ETHICS
VII. FUTURE WORK AND CONCLUSION - STREAM: Social data and knowledge collective intelligence platform for TRaining Ethical AI Models / 2310.05563 / ISBN:https://doi.org/10.48550/arXiv.2310.05563 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 STREAM: Social data and knowledge collective intelligence platform for TRaining Ethical AI Models
3 The applications of STREAM
References - Regulation and NLP (RegNLP): Taming Large Language Models / 2310.05553 / ISBN:https://doi.org/10.48550/arXiv.2310.05553 / Published by ArXiv / on (web) Publishing site
- 3 LLMs: Risk and Uncertainty
References - Compromise in Multilateral Negotiations and the Global Regulation of Artificial Intelligence / 2309.17158 / ISBN:https://doi.org/10.48550/arXiv.2309.17158 / Published by ArXiv / on (web) Publishing site
- 1. Introduction
2. The practice of multilateral negotiation and the mechanisms of compromises
3. The liberal-sovereigntist multiplicity
4. Towards a compromise: drafting the normative hybridity
5. Text negotiations as normative testing
6. Conclusion - Towards A Unified Utilitarian Ethics Framework for Healthcare Artificial Intelligence / 2309.14617 / ISBN:https://doi.org/10.48550/arXiv.2309.14617 / Published by ArXiv / on (web) Publishing site
- Method
- Autonomous Vehicles an overview on system, cyber security, risks, issues, and a way forward / 2309.14213 / ISBN:https://doi.org/10.48550/arXiv.2309.14213 / Published by ArXiv / on (web) Publishing site
- 2. Autonomous vehicles
5. Cybersecurity Risks
7. Issues - The Return on Investment in AI Ethics: A Holistic Framework / 2309.13057 / ISBN:https://doi.org/10.48550/arXiv.2309.13057 / Published by ArXiv / on (web) Publishing site
- 3. Return on Investment (ROI)
4. A Holistic Framework - An Evaluation of GPT-4 on the ETHICS Dataset / 2309.10492 / ISBN:https://doi.org/10.48550/arXiv.2309.10492 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
References
A GPT-4’s Training Set Personality Profile - Who to Trust, How and Why: Untangling AI Ethics Principles, Trustworthiness and Trust / 2309.10318 / ISBN:https://doi.org/10.48550/arXiv.2309.10318 / Published by ArXiv / on (web) Publishing site
- Different Types of Trust
Trust and AI Ethics Principles - In Consideration of Indigenous Data Sovereignty: Data Mining as a Colonial Practice / 2309.10215 / ISBN:https://doi.org/10.48550/arXiv.2309.10215 / Published by ArXiv / on (web) Publishing site
- 5 Relating Case Studies to Indigenous Data Sovereignty and CARE Principles
- Toward an Ethics of AI Belief / 2304.14577 / ISBN:https://doi.org/10.48550/arXiv.2304.14577 / Published by ArXiv / on (web) Publishing site
- 3. Proposed Novel Topics in an Ethics of AI Belief
References - Ensuring Trustworthy Medical Artificial Intelligence through Ethical and Philosophical Principles / 2304.11530 / ISBN:https://doi.org/10.48550/arXiv.2304.11530 / Published by ArXiv / on (web) Publishing site
- Conclusion and future directions
- Responsible AI Pattern Catalogue: A Collection of Best Practices for AI Governance and Engineering / 2209.04963 / ISBN:https://doi.org/10.48550/arXiv.2209.04963 / Published by ArXiv / on (web) Publishing site
- 2 Methodology
3 Governance Patterns
4 Process Patterns
5 Product Patterns
6 Related Work
References - FUTURE-AI: International consensus guideline for trustworthy and deployable artificial intelligence in healthcare / 2309.12325 / ISBN:https://doi.org/10.48550/arXiv.2309.12325 / Published by ArXiv / on (web) Publishing site
- FUTURE-AI GUIDELINE
- Specific versus General Principles for Constitutional AI / 2310.13798 / ISBN:https://doi.org/10.48550/arXiv.2310.13798 / Published by ArXiv / on (web) Publishing site
- 4 Reinforcement Learning with Good-for-Humanity Preference Models
G Over-Training on Good for Humanity
H Samples
I Responses on Prompts from PALMS, LaMDA, and InstructGPT - The Self 2.0: How AI-Enhanced Self-Clones Transform Self-Perception and Improve Presentation Skills / 2310.15112 / ISBN:https://doi.org/10.48550/arXiv.2310.15112 / Published by ArXiv / on (web) Publishing site
- 4 Findings
- Systematic AI Approach for AGI: Addressing Alignment, Energy, and AGI Grand Challenges / 2310.15274 / ISBN:https://doi.org/10.48550/arXiv.2310.15274 / Published by ArXiv / on (web) Publishing site
- 5 System Design for AI Alignment
- AI Alignment and Social Choice: Fundamental Limitations and Policy Implications / 2310.16048 / ISBN:https://doi.org/10.48550/arXiv.2310.16048 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
3 Arrow-Sen Impossibility Theorems for RLHF
4 Implications for AI Governance and Policy
References - A Comprehensive Review of AI-enabled Unmanned Aerial Vehicle: Trends, Vision, and Challenges / 2310.16360 / ISBN:https://doi.org/10.48550/arXiv.2310.16360 / Published by ArXiv / on (web) Publishing site
- IV. Artificial Intelligence Embedded UAV
- Unpacking the Ethical Value Alignment in Big Models / 2310.17551 / ISBN:https://doi.org/10.48550/arXiv.2310.17551 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Risks and Ethical Issues of Big Model
3 Investigating the Ethical Values of Large Language Models
4 Equilibrium Alignment: A Prospective Paradigm for Ethical Value Alignment
5 Conclusion
References - Moral Responsibility for AI Systems / 2310.18040 / ISBN:https://doi.org/10.48550/arXiv.2310.18040 / Published by ArXiv / on (web) Publishing site
- 2 Causal Models
3 The BvH and HK Definitions
4 The Causal Condition
6 Degree of Responsibility - AI for Open Science: A Multi-Agent Perspective for Ethically Translating Data to Knowledge / 2310.18852 / ISBN:https://doi.org/10.48550/arXiv.2310.18852 / Published by ArXiv / on (web) Publishing site
- References
- Artificial Intelligence Ethics Education in Cybersecurity: Challenges and Opportunities: a focus group report / 2311.00903 / ISBN:https://doi.org/10.48550/arXiv.2311.00903 / Published by ArXiv / on (web) Publishing site
- AI Ethics in Cybersecurity
- Human participants in AI research: Ethics and transparency in practice / 2311.01254 / ISBN:https://doi.org/10.48550/arXiv.2311.01254 / Published by ArXiv / on (web) Publishing site
- I. Introduction
V. Conclusion
References
Author
Appendix B Placing Research Ethics for Human Participants in Historical Context
Appendix C Defining the Scope of Research Participation in AI Research - LLMs grasp morality in concept / 2311.02294 / ISBN:https://doi.org/10.48550/arXiv.2311.02294 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 A General Theory of Meaning
4 The Moral Model
5 Conclusion
A Supplementary Material
References - Educating for AI Cybersecurity Work and Research: Ethics, Systems Thinking, and Communication Requirements / 2311.04326 / ISBN:https://doi.org/10.48550/arXiv.2311.04326 / Published by ArXiv / on (web) Publishing site
- Research questions
- Towards Effective Paraphrasing for Information Disguise / 2311.05018 / ISBN:https://doi.org/10.1007/978-3-031-28238-6_22 / Published by ArXiv / on (web) Publishing site
- 4 Evaluation
- Kantian Deontology Meets AI Alignment: Towards Morally Grounded Fairness Metrics / 2311.05227 / ISBN:https://doi.org/10.48550/arXiv.2311.05227 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
3 Measuring Fairness Metrics
5 Aligning with Deontological Principles: Use Cases - Unlocking the Potential of ChatGPT: A Comprehensive Exploration of its Applications, Advantages, Limitations, and Future Directions in Natural Language Processing / 2304.02017 / ISBN:https://doi.org/10.48550/arXiv.2304.02017 / Published by ArXiv / on (web) Publishing site
- 11 Conclusion
- Towards ethical multimodal systems / 2304.13765 / ISBN:https://doi.org/10.48550/arXiv.2304.13765 / Published by ArXiv / on (web) Publishing site
- Abstract
2 Related Work
3 Crafting an Ethical Dataset - A Brief History of Prompt: Leveraging Language Models. (Through Advanced Prompting) / 2310.04438 / ISBN:https://doi.org/10.48550/arXiv.2310.04438 / Published by ArXiv / on (web) Publishing site
- IX. 2019: THE YEAR OF CONTROL
- Synergizing Human-AI Agency: A Guide of 23 Heuristics for Service Co-Creation with LLM-Based Agents / 2310.15065 / ISBN:https://doi.org/10.48550/arXiv.2310.15065 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
3 Method
4 Findings
5 Discussion
References - She had Cobalt Blue Eyes: Prompt Testing to Create Aligned and Sustainable Language Models / 2310.18333 / ISBN:https://doi.org/10.48550/arXiv.2310.18333 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
4 Empirical Evaluation and Outcomes
References - How Trustworthy are Open-Source LLMs? An Assessment under Malicious Demonstrations Shows their Vulnerabilities / 2311.09447 / ISBN:https://doi.org/10.48550/arXiv.2311.09447 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Related Work
4 Experiments
References - Prudent Silence or Foolish Babble? Examining Large Language Models' Responses to the Unknown / 2311.09731 / ISBN:https://doi.org/10.48550/arXiv.2311.09731 / Published by ArXiv / on (web) Publishing site
- 3 Experiments
- Practical Cybersecurity Ethics: Mapping CyBOK to Ethical Concerns / 2311.10165 / ISBN:https://doi.org/10.48550/arXiv.2311.10165 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Background
4 Findings
5 Discussion
7 Conclusion
References - Case Repositories: Towards Case-Based Reasoning for AI Alignment / 2311.10934 / ISBN:https://doi.org/10.48550/arXiv.2311.10934 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
3 Related Work and Discussion
4 Conclusion
References - Assessing AI Impact Assessments: A Classroom Study / 2311.11193 / ISBN:https://doi.org/10.48550/arXiv.2311.11193 / Published by ArXiv / on (web) Publishing site
- 3 Study Design
4 Findings
5 Discussion
References
B Study Materials - GPT in Data Science: A Practical Exploration of Model Selection / 2311.11516 / ISBN:https://doi.org/10.48550/arXiv.2311.11516 / Published by ArXiv / on (web) Publishing site
- III. Approach: capturing and representing heuristics behind GPT's decision-making process
VI. Future work - Responsible AI Research Needs Impact Statements Too / 2311.11776 / ISBN:https://doi.org/10.48550/arXiv.2311.11776 / Published by ArXiv / on (web) Publishing site
- Abstract
Requiring adverse impact statements for RAI research is long overdue
Suggestions for More Meaningful Engagement with the Impact of RAI Research
Concluding Reflections
References - Large Language Models in Education: Vision and Opportunities / 2311.13160 / ISBN:https://doi.org/10.48550/arXiv.2311.13160 / Published by ArXiv / on (web) Publishing site
- II. Education and LLMS
IV. LLM-empowered education - Ethical Implications of ChatGPT in Higher Education: A Scoping Review / 2311.14378 / ISBN:https://doi.org/10.48550/arXiv.2311.14378 / Published by ArXiv / on (web) Publishing site
- Introduction
Results - Potential Societal Biases of ChatGPT in Higher Education: A Scoping Review / 2311.14381 / ISBN:https://doi.org/10.48550/arXiv.2311.14381 / Published by ArXiv / on (web) Publishing site
- OVERVIEW OF SOCIETAL BIASES IN GAI MODELS
ANALYTICAL FRAMEWORK - Ethics and Responsible AI Deployment / 2311.14705 / ISBN:https://doi.org/10.48550/arXiv.2311.14705 / Published by ArXiv / on (web) Publishing site
- 4. Addressing bias, transparency, and accountability
5. Ethical AI design principles and guidelines
6. The role of AI in decision-making: ethical implications and potential consequences
7. Establishing responsible AI governance and oversight
9. Discussion on engaging stakeholders: fostering dialogue and collaboration between developers, users, and affected communities. - Survey on AI Ethics: A Socio-technical Perspective / 2311.17228 / ISBN:https://doi.org/10.48550/arXiv.2311.17228 / Published by ArXiv / on (web) Publishing site
- 2 Privacy and data protection
3 Transparency and explainability
5 Responsibility, accountability, and regulations
References - Understanding Teacher Perspectives and Experiences after Deployment of AI Literacy Curriculum in Middle-school Classrooms / 2312.04839 / ISBN:https://doi.org/10.48550/arXiv.2312.04839 / Published by ArXiv / on (web) Publishing site
- 3 Results
- RE-centric Recommendations for the Development of Trustworthy(er) Autonomous Systems / 2306.01774 / ISBN:https://doi.org/10.48550/arXiv.2306.01774 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Related work
4 Results & analysis
6 Conclusion & future work - Ethical Considerations Towards Protestware / 2306.10019 / ISBN:https://doi.org/10.48550/arXiv.2306.10019 / Published by ArXiv / on (web) Publishing site
- V. Implications with future directions
- Control Risk for Potential Misuse of Artificial Intelligence in Science / 2312.06632 / ISBN:https://doi.org/10.48550/arXiv.2312.06632 / Published by ArXiv / on (web) Publishing site
- 2 Risks of Misuse for Artificial Intelligence in Science
3 Control the Risks of AI Models in Science
References
Appendix C Detailed Implementation of SciGuard - Disentangling Perceptions of Offensiveness: Cultural and Moral Correlates / 2312.06861 / ISBN:https://doi.org/10.48550/arXiv.2312.06861 / Published by ArXiv / on (web) Publishing site
- Abstract
...
Data Collection
Study 1: Geo-cultural Differences in Offensiveness
Study 2: Moral Foundations of Offensiveness
Study 3: Implications for Responsible AI
General Discussion
Geo-Cultural Factors
Moral Factors - The AI Assessment Scale (AIAS): A Framework for Ethical Integration of Generative AI in Educational Assessment / 2312.07086 / ISBN:https://doi.org/10.48550/arXiv.2312.07086 / Published by ArXiv / on (web) Publishing site
- Problematizing The View Of GenAI Content As Academic Misconduct
Conclusion
References - Culturally Responsive Artificial Intelligence -- Problems, Challenges and Solutions / 2312.08467 / ISBN:https://doi.org/10.48550/arXiv.2312.08467 / Published by ArXiv / on (web) Publishing site
- Recommendations
- Investigating Responsible AI for Scientific Research: An Empirical Study / 2312.09561 / ISBN:https://doi.org/10.48550/arXiv.2312.09561 / Published by ArXiv / on (web) Publishing site
- Abstract
V. Discussion - Designing Guiding Principles for NLP for Healthcare: A Case Study of Maternal Health / 2312.11803 / ISBN:https://doi.org/10.48550/arXiv.2312.11803 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Objective
2 Background and significance
3 Materials and methods
4 Results
5 Discussion
References
A Extended Survey Results
B Extended Guiding Principles
C Full survey questions - Beyond Fairness: Alternative Moral Dimensions for Assessing Algorithms and Designing Systems / 2312.12559 / ISBN:https://doi.org/10.48550/arXiv.2312.12559 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 The Reign of Algorithmic Fairness
3 Taking a Step Forward
4 Limitations
5 Conclusion
References - Learning Human-like Representations to Enable Learning Human Values / 2312.14106 / ISBN:https://doi.org/10.48550/arXiv.2312.14106 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Related Work
3 Problem Formulation
4 Learning Human Morality Judgments
5 Representational Alignment Supports Learning Multiple Human Values
6 Discussion
References - Improving Task Instructions for Data Annotators: How Clear Rules and Higher Pay Increase Performance in Data Annotation in the AI Economy / 2312.14565 / ISBN:https://doi.org/10.48550/arXiv.2312.14565 / Published by ArXiv / on (web) Publishing site
- V. Discussion
VI. Conclusion - Culturally-Attuned Moral Machines: Implicit Learning of Human Value Systems by AI through Inverse Reinforcement Learning / 2312.17479 / ISBN:https://doi.org/10.48550/arXiv.2312.17479 / Published by ArXiv / on (web) Publishing site
- Abstract
Introduction
Results
Discussion - Autonomous Threat Hunting: A Future Paradigm for AI-Driven Threat Intelligence / 2401.00286 / ISBN:https://doi.org/10.48550/arXiv.2401.00286 / Published by ArXiv / on (web) Publishing site
- References
- Exploring the Frontiers of LLMs in Psychological Applications: A Comprehensive Review / 2401.01519 / ISBN:https://doi.org/10.48550/arXiv.2401.01519 / Published by ArXiv / on (web) Publishing site
- 1. Introduction
5. LLMs in social and cultural psychology
7. Challenges and future directions - AI Ethics Principles in Practice: Perspectives of Designers and Developers / 2112.07467 / ISBN:https://doi.org/10.48550/arXiv.2112.07467 / Published by ArXiv / on (web) Publishing site
- Abstract
II. Related work
III. Methods
IV. Results
V. Discussion and suggestions
VII. Conclusion
References - Unmasking Bias in AI: A Systematic Review of Bias Detection and Mitigation Strategies in Electronic Health Record-based Models / 2310.19917 / ISBN:https://doi.org/10.48550/arXiv.2310.19917 / Published by ArXiv / on (web) Publishing site
- Discussion
- Resolving Ethics Trade-offs in Implementing Responsible AI / 2401.08103 / ISBN:https://doi.org/10.48550/arXiv.2401.08103 / Published by ArXiv / on (web) Publishing site
- I. Introduction
II. Approaches for Resolving Trade-offs
References - Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making / 2401.08691 / ISBN:https://doi.org/10.48550/arXiv.2401.08691 / Published by ArXiv / on (web) Publishing site
- Contents / List of figures / List of tables / Acronyms
1 Introduction
3 Bias on demand: a framework for generating synthetic data with bias
4 Fairness metrics landscape in machine learning
II Mitigating bias - 5 Fairness mitigation
6 FFTree: a flexible tree to mitigate multiple fairness criteria
III Accounting for bias - 7 Addressing fairness in the banking sector
9 Towards fairness through time
IV Conclusions - FAIR Enough How Can We Develop and Assess a FAIR-Compliant Dataset for Large Language Models' Training? / 2401.11033 / ISBN:https://doi.org/10.48550/arXiv.2401.11033 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
- Enabling Global Image Data Sharing in the Life Sciences / 2401.13023 / ISBN:https://doi.org/10.48550/arXiv.2401.13023 / Published by ArXiv / on (web) Publishing site
- 3. Use cases representing different image data types and their challenges and status for sharing
- Beyond principlism: Practical strategies for ethical AI use in research practices / 2401.15284 / ISBN:https://doi.org/10.48550/arXiv.2401.15284 / Published by ArXiv / on (web) Publishing site
- 2 A shift to user-centered realism in scientific contexts
3 Five specific goals and action-guiding strategies for ethical AI use in research practices - A Scoping Study of Evaluation Practices for Responsible AI Tools: Steps Towards Effectiveness Evaluations / 2401.17486 / ISBN:https://doi.org/10.48550/arXiv.2401.17486 / Published by ArXiv / on (web) Publishing site
- References
D Summary of themes and codes - Detecting Multimedia Generated by Large AI Models: A Survey / 2402.00045 / ISBN:https://doi.org/10.48550/arXiv.2402.00045 / Published by ArXiv / on (web) Publishing site
- 3 Detection
- Responsible developments and networking research: a reflection beyond a paper ethical statement / 2402.00442 / ISBN:https://doi.org/10.48550/arXiv.2402.00442 / Published by ArXiv / on (web) Publishing site
- 6 Concluding remarks
- (A)I Am Not a Lawyer, But...: Engaging Legal Experts towards Responsible LLM Policies for Legal Advice / 2402.01864 / ISBN:https://doi.org/10.48550/arXiv.2402.01864 / Published by ArXiv / on (web) Publishing site
- 4 Results
- POLARIS: A framework to guide the development of Trustworthy AI systems / 2402.05340 / ISBN:https://doi.org/10.48550/arXiv.2402.05340 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
3 State of the practice
4 The POLARIS framework - Face Recognition: to Deploy or not to Deploy? A Framework for Assessing the Proportional Use of Face Recognition Systems in Real-World Scenarios / 2402.05731 / ISBN:https://doi.org/10.48550/arXiv.2402.05731 / Published by ArXiv / on (web) Publishing site
- 4. Proposed framework
- Ethics in AI through the Practitioner's View: A Grounded Theory Literature Review / 2206.09514 / ISBN:https://doi.org/10.48550/arXiv.2206.09514 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Background
5 Findings
References
Authors - Generative Artificial Intelligence in Healthcare: Ethical Considerations and Assessment Checklist / 2311.02107 / ISBN:https://doi.org/10.48550/arXiv.2311.02107 / Published by ArXiv / on (web) Publishing site
- Appendix
- How do machines learn? Evaluating the AIcon2abs method / 2401.07386 / ISBN:https://doi.org/10.48550/arXiv.2401.07386 / Published by ArXiv / on (web) Publishing site
- 4. Results
- I Think, Therefore I am: Benchmarking Awareness of Large Language Models Using AwareBench / 2401.17882 / ISBN:https://doi.org/10.48550/arXiv.2401.17882 / Published by ArXiv / on (web) Publishing site
- 2 Related Work
3 Awareness in LLMs
5 Experiments
6 Conclusion
References
B Experimental Settings & Results - Mapping the Ethics of Generative AI: A Comprehensive Scoping Review / 2402.08323 / ISBN:https://doi.org/10.48550/arXiv.2402.08323 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Methods
3 Results
References - User Modeling and User Profiling: A Comprehensive Survey / 2402.09660 / ISBN:https://doi.org/10.48550/arXiv.2402.09660 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
3 Paradigm Shifts and New Trends
4 Current Taxonomy
5 Discussion and Future Research Directions - Inadequacies of Large Language Model Benchmarks in the Era of Generative Artificial Intelligence / 2402.09880 / ISBN:https://doi.org/10.48550/arXiv.2402.09880 / Published by ArXiv / on (web) Publishing site
- I. Introduction
V. Processual Elements
VII. Discussions - Evolving AI Collectives to Enhance Human Diversity and Enable Self-Regulation / 2402.12590 / ISBN:https://doi.org/10.48550/arXiv.2402.12590 / Published by ArXiv / on (web) Publishing site
- 1. Introduction
4. Robustness of Free-Formed AI Collectives Against Risks
5. Open Challenges for Free-Formed AI Collectives
A. Cocktail Simulation
C. Public Good Simulation - What if LLMs Have Different World Views: Simulating Alien Civilizations with LLM-based Agents / 2402.13184 / ISBN:https://doi.org/10.48550/arXiv.2402.13184 / Published by ArXiv / on (web) Publishing site
- 2 Related Work
3 CosmoAgent Simulation Setting
4 CosmoAgent Architecture
5 Evaluation - The METRIC-framework for assessing data quality for trustworthy AI in medicine: a systematic review / 2402.13635 / ISBN:https://doi.org/10.48550/arXiv.2402.13635 / Published by ArXiv / on (web) Publishing site
- METRIC-framework for medical training data
Methods
References - The European Commitment to Human-Centered Technology: The Integral Role of HCI in the EU AI Act's Success / 2402.14728 / ISBN:https://doi.org/10.48550/arXiv.2402.14728 / Published by ArXiv / on (web) Publishing site
- Abstract
1 The increasing importance of AI
2 The EU AI Act
4 There is no trustworthy AI without HCI
5 There is no community without common language and communication
References - Multi-stakeholder Perspective on Responsible Artificial Intelligence and Acceptability in Education / 2402.15027 / ISBN:https://doi.org/10.48550/arXiv.2402.15027 / Published by ArXiv / on (web) Publishing site
- 5 Results
6 Discussion
References - Autonomous Vehicles: Evolution of Artificial Intelligence and Learning Algorithms / 2402.17690 / ISBN:https://doi.org/10.48550/arXiv.2402.17690 / Published by ArXiv / on (web) Publishing site
- VI. AI and Learning Algorithms Statistics for Autonomous Vehicles
- Envisioning the Applications and Implications of Generative AI for News Media / 2402.18835 / ISBN:https://doi.org/10.48550/arXiv.2402.18835 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 The Suitability of Generative AI for Newsroom Tasks
References - FATE in MMLA: A Student-Centred Exploration of Fairness, Accountability, Transparency, and Ethics in Multimodal Learning Analytics / 2402.19071 / ISBN:https://doi.org/10.48550/arXiv.2402.19071 / Published by ArXiv / on (web) Publishing site
- 2. Background
4. Results
References - Guidelines for Integrating Value Sensitive Design in Responsible AI Toolkits / 2403.00145 / ISBN:https://doi.org/10.48550/arXiv.2403.00145 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Background and Related Work
3 Methodology
4 Results
5 Discussion
6 Conclusion
References - Implications of Regulations on the Use of AI and Generative AI for Human-Centered Responsible Artificial Intelligence / 2403.00148 / ISBN:https://doi.org/10.48550/arXiv.2403.00148 / Published by ArXiv / on (web) Publishing site
- 1 Motivation & Background
- The Minimum Information about CLinical Artificial Intelligence Checklist for Generative Modeling Research (MI-CLAIM-GEN) / 2403.02558 / ISBN:https://doi.org/10.48550/arXiv.2403.02558 / Published by ArXiv / on (web) Publishing site
- Part 1. Study design
Part 2. A new train-test split for prompt development and few-shot learning
Part 4. Model evaluation
Table 1. Updated MI-CLAIM checklist for generative AI clinical studies. - A Survey on Human-AI Teaming with Large Pre-Trained Models / 2403.04931 / ISBN:https://doi.org/10.48550/arXiv.2403.04931 / Published by ArXiv / on (web) Publishing site
- 2 AI Model Improvements with Human-AI Teaming
- Kantian Deontology Meets AI Alignment: Towards Morally Grounded Fairness Metrics / 2311.05227 / ISBN:https://doi.org/10.48550/arXiv.2311.05227 / Published by ArXiv / on (web) Publishing site
- References
- Exploring the Frontiers of LLMs in Psychological Applications: A Comprehensive Review / 2401.01519 / ISBN:https://doi.org/10.48550/arXiv.2401.01519 / Published by ArXiv / on (web) Publishing site
- References
- Moral Sparks in Social Media Narratives / 2310.19268 / ISBN:https://doi.org/10.48550/arXiv.2310.19268 / Published by ArXiv / on (web) Publishing site
- Abstract
1. Introduction
4. Methods
5. Results
6. Discussion and Conclusion
References - Responsible Artificial Intelligence: A Structured Literature Review / 2403.06910 / ISBN:https://doi.org/10.48550/arXiv.2403.06910 / Published by ArXiv / on (web) Publishing site
- 1. Introduction
3. Analysis
References - Legally Binding but Unfair? Towards Assessing Fairness of Privacy Policies / 2403.08115 / ISBN:https://doi.org/10.48550/arXiv.2403.08115 / Published by ArXiv / on (web) Publishing site
- 2 Related Work
- Towards a Privacy and Security-Aware Framework for Ethical AI: Guiding the Development and Assessment of AI Systems / 2403.08624 / ISBN:https://doi.org/10.48550/arXiv.2403.08624 / Published by ArXiv / on (web) Publishing site
- 2 Theoretical Background
4 Results of the Systematic Literature Review - Review of Generative AI Methods in Cybersecurity / 2403.08701 / ISBN:https://doi.org/10.48550/arXiv.2403.08701 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
4 Cyber Defence - Trust in AI: Progress, Challenges, and Future Directions / 2403.14680 / ISBN:https://doi.org/10.48550/arXiv.2403.14680 / Published by ArXiv / on (web) Publishing site
- 1. Introduction
3. Findings
Reference - AI Ethics: A Bibliometric Analysis, Critical Issues, and Key Gaps / 2403.14681 / ISBN:https://doi.org/10.48550/arXiv.2403.14681 / Published by ArXiv / on (web) Publishing site
- Key Gaps
- Safeguarding Marketing Research: The Generation, Identification, and Mitigation of AI-Fabricated Disinformation / 2403.14706 / ISBN:https://doi.org/10.48550/arXiv.2403.14706 / Published by ArXiv / on (web) Publishing site
- Methodology
Results - The Journey to Trustworthy AI- Part 1 Pursuit of Pragmatic Frameworks / 2403.15457 / ISBN:https://doi.org/10.48550/arXiv.2403.15457 / Published by ArXiv / on (web) Publishing site
- Abstract
2 Trustworthy AI: Too Many Definitions or Lack Thereof?
4 AI Regulation: Current Global Landscape
9 A Few Suggestions for a Viable Path Forward
A Appendix
References - The Pursuit of Fairness in Artificial Intelligence Models A Survey / 2403.17333 / ISBN:https://doi.org/10.48550/arXiv.2403.17333 / Published by ArXiv / on (web) Publishing site
- 3 Conceptualizing Fairness and Bias in ML
5 Ways to mitigate bias and promote Fairness
References - Domain-Specific Evaluation Strategies for AI in Journalism / 2403.17911 / ISBN:https://doi.org/10.48550/arXiv.2403.17911 / Published by ArXiv / on (web) Publishing site
- 2 Existing AI Evaluation Approaches
3 Blueprints for AI Evaluation in Journalism
References - Power and Play Investigating License to Critique in Teams AI Ethics Discussions / 2403.19049 / ISBN:https://doi.org/10.48550/arXiv.2403.19049 / Published by ArXiv / on (web) Publishing site
- 1 Introduction and Related Work
4 RQ2: How Do AI Ethics Discussions Unfold while Playing a Game Oriented toward Speculative Critique?
References - Implications of the AI Act for Non-Discrimination Law and Algorithmic Fairness / 2403.20089 / ISBN:https://doi.org/10.48550/arXiv.2403.20089 / Published by ArXiv / on (web) Publishing site
- 2 Non-discrimination law vs. algorithmic fairness
A Appendix - Exploring the Nexus of Large Language Models and Legal Systems: A Short Survey / 2404.00990 / ISBN:https://doi.org/10.48550/arXiv.2404.00990 / Published by ArXiv / on (web) Publishing site
- 2 Applications of Large Language Models in Legal Tasks
4 Legal Problems of Large Language Models
References - A Review of Multi-Modal Large Language and Vision Models / 2404.01322 / ISBN:https://doi.org/10.48550/arXiv.2404.01322 / Published by ArXiv / on (web) Publishing site
- 4 Specific Large Language Models
6 Model Tuning - Designing for Human-Agent Alignment: Understanding what humans want from their agents / 2404.04289 / ISBN:https://doi.org/10.1145/3613905.3650948 / Published by ArXiv / on (web) Publishing site
- Abstract
2 Background
4 Findings
5 Discussion
References - Is Your AI Truly Yours? Leveraging Blockchain for Copyrights, Provenance, and Lineage / 2404.06077 / ISBN:https://doi.org/10.48550/arXiv.2404.06077 / Published by ArXiv / on (web) Publishing site
- VI. Evaluation
- Frontier AI Ethics: Anticipating and Evaluating the Societal Impacts of Language Model Agents / 2404.06750 / ISBN:https://arxiv.org/abs/2404.06750 / Published by ArXiv / on (web) Publishing site
- Language Model Agents in Society
References - A Critical Survey on Fairness Benefits of Explainable AI / 2310.13007 / ISBN:https://doi.org/10.1145/3630106.3658990 / Published by ArXiv / on (web) Publishing site
- 4 Critical Survey
References - AI Alignment: A Comprehensive Survey / 2310.19852 / ISBN:https://doi.org/10.48550/arXiv.2310.19852 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Learning from Feedback
3 Learning under Distribution Shift
4 Assurance
5 Governance
6 Conclusion
References - Regulating AI-Based Remote Biometric Identification. Investigating the Public Demand for Bans, Audits, and Public Database Registrations / 2401.13605 / ISBN:https://doi.org/10.48550/arXiv.2401.13605 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
4 Public Opinion on AI Governance
References - Generative Ghosts: Anticipating Benefits and Risks of AI Afterlives / 2402.01662 / ISBN:https://doi.org/10.48550/arXiv.2402.01662 / Published by ArXiv / on (web) Publishing site
- 2 Related Work
4 Benefits and Risks of Generative Ghosts
5 Discussion
References - Epistemic Power in AI Ethics Labor: Legitimizing Located Complaints / 2402.08171 / ISBN:https://doi.org/10.1145/3630106.3658973 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
References - On the role of ethics and sustainability in business innovation / 2404.07678 / ISBN:https://doi.org/10.48550/arXiv.2404.07678 / Published by ArXiv / on (web) Publishing site
- Ethical considerations
Sustainability considerations - PoliTune: Analyzing the Impact of Data Selection and Fine-Tuning on Economic and Political Biases in Large Language Models / 2404.08699 / ISBN:https://doi.org/10.48550/arXiv.2404.08699 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
3 Methodology - Detecting AI Generated Text Based on NLP and Machine Learning Approaches / 2404.10032 / ISBN:https://doi.org/10.48550/arXiv.2404.10032 / Published by ArXiv / on (web) Publishing site
- III. Proposed Methodology
- Debunking Robot Rights Metaphysically, Ethically, and Legally / 2404.10072 / ISBN:https://doi.org/10.48550/arXiv.2404.10072 / Published by ArXiv / on (web) Publishing site
- 5 Embodied Enactive (Post-Cartesian) Perspectives on Cognition
8 The Troubling Implications of Legal Rationales for Robot Rights
9 The Enduring Irresponsibility of AI Rights Talk
References - Characterizing and modeling harms from interactions with design patterns in AI interfaces / 2404.11370 / ISBN:https://doi.org/10.48550/arXiv.2404.11370 / Published by ArXiv / on (web) Publishing site
- 4 DECAI: Design-Enhanced Control of AI Systems
References - Taxonomy to Regulation: A (Geo)Political Taxonomy for AI Risks and Regulatory Measures in the EU AI Act / 2404.11476 / ISBN:https://doi.org/10.48550/arXiv.2404.11476 / Published by ArXiv / on (web) Publishing site
- 3 A Geo-Political AI Risk Taxonomy
4 European Union Artificial Intelligence Act - Just Like Me: The Role of Opinions and Personal Experiences in The Perception of Explanations in Subjective Decision-Making / 2404.12558 / ISBN:https://doi.org/10.48550/arXiv.2404.12558 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
3 Results - Large Language Model Supply Chain: A Research Agenda / 2404.12736 / ISBN:https://doi.org/10.48550/arXiv.2404.12736 / Published by ArXiv / on (web) Publishing site
- 4 LLM Lifecycle
- The Necessity of AI Audit Standards Boards / 2404.13060 / ISBN:https://doi.org/10.48550/arXiv.2404.13060 / Published by ArXiv / on (web) Publishing site
- 2 Audit the process, not just the product
3 Governance for safety - Modeling Emotions and Ethics with Large Language Models / 2404.13071 / ISBN:https://doi.org/10.48550/arXiv.2404.13071 / Published by ArXiv / on (web) Publishing site
- 2 Qualifying and Quantifying Emotions
References - From Model Performance to Claim: How a Change of Focus in Machine Learning Replicability Can Help Bridge the Responsibility Gap / 2404.13131 / ISBN:https://doi.org/10.1145/3630106.3658951 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Disentangling Replicability of Model Performance Claim and Replicability of Social Claim
3 How Claim Replicability Helps Bridge the Responsibility Gap
References - A Practical Multilevel Governance Framework for Autonomous and Intelligent Systems / 2404.13719 / ISBN:https://doi.org/10.48550/arXiv.2404.13719 / Published by ArXiv / on (web) Publishing site
- II. Comprehensive Governance of Emerging Technologies
IV. Application of the Framework for the Development of AIs - Beyond Personhood: Agency, Accountability, and the Limits of Anthropomorphic Ethical Analysis / 2404.13861 / ISBN:https://doi.org/10.48550/arXiv.2404.13861 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Mechanistic Agency: A Common View in AI Practice
4 Alternatives to AI as Agent
References - Fairness in AI: challenges in bridging the gap between algorithms and law / 2404.19371 / ISBN:https://doi.org/10.48550/arXiv.2404.19371 / Published by ArXiv / on (web) Publishing site
- I. Introduction
II. Discrimination in Law
III. Prevalent Algorithmic Fairness Definitions
IV. Criteria for the Selection of Fairness Methods - War Elephants: Rethinking Combat AI and Human Oversight / 2404.19573 / ISBN:https://doi.org/10.48550/arXiv.2404.19573 / Published by ArXiv / on (web) Publishing site
- 4 Discussion
- Not a Swiss Army Knife: Academics' Perceptions of Trade-Offs Around Generative Artificial Intelligence Use / 2405.00995 / ISBN:https://doi.org/10.48550/arXiv.2405.00995 / Published by ArXiv / on (web) Publishing site
- 2 Related Work
- Towards an Ethical and Inclusive Implementation of Artificial Intelligence in Organizations: A Multidimensional Framework / 2405.01697 / ISBN:https://doi.org/10.48550/arXiv.2405.01697 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Technocriticism and Key Actors in the Age of AI
2 How can organizations participate
3 Four Pillars for Implementing an Ethical Framework in Organizations
4 Conclusions - A Survey on Large Language Models for Critical Societal Domains: Finance, Healthcare, and Law / 2405.01769 / ISBN:https://doi.org/10.48550/arXiv.2405.01769 / Published by ArXiv / on (web) Publishing site
- 3 Finance
- AI-Powered Autonomous Weapons Risk Geopolitical Instability and Threaten AI Research / 2405.01859 / ISBN:https://doi.org/10.48550/arXiv.2405.01859 / Published by ArXiv / on (web) Publishing site
- References
- Responsible AI: Portraits with Intelligent Bibliometrics / 2405.02846 / ISBN:https://doi.org/10.48550/arXiv.2405.02846 / Published by ArXiv / on (web) Publishing site
- I. Introduction
II. Conceptualization: Responsible AI
V. Discussion and Conclusions
References - Organizing a Society of Language Models: Structures and Mechanisms for Enhanced Collective Intelligence / 2405.03825 / ISBN:https://doi.org/10.48550/arXiv.2405.03825 / Published by ArXiv / on (web) Publishing site
- 6 Unified Legal Framework
7 Conclusion - A Fourth Wave of Open Data? Exploring the Spectrum of Scenarios for Open Data and Generative AI / 2405.04333 / ISBN:https://doi.org/10.48550/arXiv.2405.04333 / Published by ArXiv / on (web) Publishing site
- Glossary of Terms
- Guiding the Way: A Comprehensive Examination of AI Guidelines in Global Media / 2405.04706 / ISBN:https://doi.org/10.48550/arXiv.2405.04706 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Theoretical Framework
3 Data and Methods
4 Results
5 Discussion and conclusions - Towards ethical multimodal systems / 2304.13765 / ISBN:https://doi.org/10.48550/arXiv.2304.13765 / Published by ArXiv / on (web) Publishing site
- References
- RAI Guidelines: Method for Generating Responsible AI Guidelines Grounded in Regulations and Usable by (Non-)Technical Roles / 2307.15158 / ISBN:https://doi.org/10.48550/arXiv.2307.15158 / Published by ArXiv / on (web) Publishing site
- Abstract
2 Related Work
4 Method for Generating Responsible AI Guidelines
6 Discussion - Redefining Qualitative Analysis in the AI Era: Utilizing ChatGPT for Efficient Thematic Analysis / 2309.10771 / ISBN:https://doi.org/10.48550/arXiv.2309.10771 / on (web) Publishing site
- References
- XXAI: Towards eXplicitly eXplainable Artificial Intelligence / 2401.03093 / ISBN:https://doi.org/10.48550/arXiv.2401.03093 / Published by ArXiv / on (web) Publishing site
- 4. Discussion of the problems of symbolic AI and ways to overcome them
- Should agentic conversational AI change how we think about ethics? Characterising an interactional ethics centred on respect / 2401.09082 / ISBN:https://doi.org/10.48550/arXiv.2401.09082 / Published by ArXiv / on (web) Publishing site
- Introduction
Evaluating a system as a social actor
Social-interactional harms
Design implications for LLM agents
Informing existing HCI approaches
References - Unsocial Intelligence: an Investigation of the Assumptions of AGI Discourse / 2401.13142 / ISBN:https://doi.org/10.48550/arXiv.2401.13142 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Between Human Intelligence and Technology: AGI’s Dual Value-Laden Pedigrees
3 The Motley Choices of AGI Discourse
4 Towards Contextualized, Politically Legitimate, and Social Intelligence
References
A Dimensions of AGI: a Summary - Not My Voice! A Taxonomy of Ethical and Safety Harms of Speech Generators / 2402.01708 / ISBN:https://doi.org/10.48550/arXiv.2402.01708 / Published by ArXiv / on (web) Publishing site
- 6 Taxonomy of Harms
- Social Choice Should Guide AI Alignment in Dealing with Diverse Human Feedback / 2404.10271 / ISBN:https://doi.org/10.48550/arXiv.2404.10271 / Published by ArXiv / on (web) Publishing site
- 5. What Is the Format of Human Feedback?
8. How Should We Account for Behavioral Aspects and Human Cognitive Structures?
Impact Statement
References - A scoping review of using Large Language Models (LLMs) to investigate Electronic Health Records (EHRs) / 2405.03066 / ISBN:https://doi.org/10.48550/arXiv.2405.03066 / Published by ArXiv / on (web) Publishing site
- 2 Materials
- Integrating Emotional and Linguistic Models for Ethical Compliance in Large Language Models / 2405.07076 / ISBN:https://doi.org/10.48550/arXiv.2405.07076 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
3 Quantitative Models of Emotions, Behaviors, and Ethics
5 Conclusion
References
Appendix S: Multiple Adversarial LLMs
Appendix D: Complex Emotions - Using ChatGPT for Thematic Analysis / 2405.08828 / ISBN:https://doi.org/10.48550/arXiv.2405.08828 / Published by ArXiv / on (web) Publishing site
- 3 Pilot-testing: UN Policy Documents Thematic Analysis Supported by GPT
- When AI Eats Itself: On the Caveats of Data Pollution in the Era of Generative AI / 2405.09597 / ISBN:https://doi.org/10.48550/arXiv.2405.09597 / Published by ArXiv / on (web) Publishing site
- 2 RQ1: What Happens When AI Eats Itself ?
- The Narrow Depth and Breadth of Corporate Responsible AI Research / 2405.12193 / ISBN:https://doi.org/10.48550/arXiv.2405.12193 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
3 Motivations for Industry to Engage in Responsible AI Research
4 The Narrow Depth of Industry’s Responsible AI Research
7 Discussion
8 Conclusion
References
S2 Additional Analyses on Linguistic Analysis - Pragmatic auditing: a pilot-driven approach for auditing Machine Learning systems / 2405.13191 / ISBN:https://doi.org/10.48550/arXiv.2405.13191 / Published by ArXiv / on (web) Publishing site
- 2 Related Work
- A Comprehensive Overview of Large Language Models (LLMs) for Cyber Defences: Opportunities and Directions / 2405.14487 / ISBN:https://doi.org/10.48550/arXiv.2405.14487 / Published by ArXiv / on (web) Publishing site
- IV. Network Security
- Towards Clinical AI Fairness: Filling Gaps in the Puzzle / 2405.17921 / ISBN:https://doi.org/10.48550/arXiv.2405.17921 / Published by ArXiv / on (web) Publishing site
- Discussion
Reference
Additional material - The ethical situation of DALL-E 2 / 2405.19176 / ISBN:https://doi.org/10.48550/arXiv.2405.19176 / Published by ArXiv / on (web) Publishing site
- 6 Technological mediation
7 Conclusion - Using Large Language Models for Humanitarian Frontline Negotiation: Opportunities and Considerations / 2405.20195 / ISBN:https://doi.org/10.48550/arXiv.2405.20195 / Published by ArXiv / on (web) Publishing site
- 2. Related Work
A. Appendix - There and Back Again: The AI Alignment Paradox / 2405.20806 / ISBN:https://doi.org/10.48550/arXiv.2405.20806 / Published by ArXiv / on (web) Publishing site
- Abstract
Paper - Responsible AI for Earth Observation / 2405.20868 / ISBN:https://doi.org/10.48550/arXiv.2405.20868 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Mitigating (Unfair) Bias
5 Maintaining Scientific Excellence, Open Data, and Guiding AI Usage Based on Ethical Principles in EO - Gender Bias Detection in Court Decisions: A Brazilian Case Study / 2406.00393 / ISBN:https://doi.org/10.48550/arXiv.2406.00393 / Published by ArXiv / on (web) Publishing site
- 3 Framework
4 Discussion
5 Final Remarks
References
A DVC Dataset: Domestic Violence Cases
B PAC Dataset: Parental Alienation Cases
C Biases - Transforming Computer Security and Public Trust Through the Exploration of Fine-Tuning Large Language Models / 2406.00628 / ISBN:https://doi.org/10.48550/arXiv.2406.00628 / Published by ArXiv / on (web) Publishing site
- 3 Experimental Design, Overview, and Discussion
- How Ethical Should AI Be? How AI Alignment Shapes the Risk Preferences of LLMs / 2406.01168 / ISBN:https://doi.org/10.48550/arXiv.2406.01168 / Published by ArXiv / on (web) Publishing site
- Abstract
Introduction
II. Risk Characteristics of LLMs
III. Impact of Alignment on LLMs’ Risk Preferences
IV. Impact of Alignments on Corporate Investment Forecasts
VI. Conclusions
References
Figures and tables - Evaluating AI fairness in credit scoring with the BRIO tool / 2406.03292 / ISBN:https://doi.org/10.48550/arXiv.2406.03292 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
3 ML model construction
4 Fairness violation analysis in BRIO
5 Risk assessment in BRIO
6 Risk analysis via BRIO for the German Credit Dataset
7 Revenue analysis - MoralBench: Moral Evaluation of LLMs / 2406.04428 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Related Work
3 Benchmark and Method
References
Appendix - Can Prompt Modifiers Control Bias? A Comparative Analysis of Text-to-Image Generative Models / 2406.05602 / Published by ArXiv / on (web) Publishing site
- 5. Results
- Deception Analysis with Artificial Intelligence: An Interdisciplinary Perspective / 2406.05724 / ISBN:https://doi.org/10.48550/arXiv.2406.05724 / Published by ArXiv / on (web) Publishing site
- 3 Reductionism & Previous Research in Deceptive AI
- An Empirical Design Justice Approach to Identifying Ethical Considerations in the Intersection of Large Language Models and Social Robotics / 2406.06400 / ISBN:https://doi.org/10.48550/arXiv.2406.06400 / Published by ArXiv / on (web) Publishing site
- 2 Theoretical Background
3 Methodology
4 Findings
5 Discussion
6 Conclusions and Recommendations - The Ethics of Interaction: Mitigating Security Threats in LLMs / 2401.12273 / ISBN:https://doi.org/10.48550/arXiv.2401.12273 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Why Ethics Matter in LLM Attacks?
3 Potential Misuse and Security Concerns
5 Preemptive Ethical Measures
6 Ethical Response to LLM Attacks - Global AI Governance in Healthcare: A Cross-Jurisdictional Regulatory Analysis / 2406.08695 / ISBN:https://doi.org/10.48550/arXiv.2406.08695 / Published by ArXiv / on (web) Publishing site
- References
A Supplemental Tables - Fair by design: A sociotechnical approach to justifying the fairness of AI-enabled systems across the lifecycle / 2406.09029 / ISBN:https://doi.org/10.48550/arXiv.2406.09029 / Published by ArXiv / on (web) Publishing site
- 3 Assuring fairness across the AI lifecycle
4 Assuring AI fairness in healthcare - Some things never change: how far generative AI can really change software engineering practice / 2406.09725 / ISBN:https://doi.org/10.48550/arXiv.2406.09725 / Published by ArXiv / on (web) Publishing site
- 4 Results
- Justice in Healthcare Artificial Intelligence in Africa / 2406.10653 / ISBN:https://doi.org/10.48550/arXiv.2406.10653 / Published by ArXiv / on (web) Publishing site
- 4. Prioritizing the Common Good Over Corporate Greed
Conclusion - Documenting Ethical Considerations in Open Source AI Models / 2406.18071 / ISBN:https://doi.org/10.48550/arXiv.2406.18071 / Published by ArXiv / on (web) Publishing site
- 2 RELATED WORK
3 METHODOLOGY AND STUDY DESIGN
REFERENCES - AI Alignment through Reinforcement Learning from Human Feedback? Contradictions and Limitations / 2406.18346 / ISBN:https://doi.org/10.48550/arXiv.2406.18346 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
3 Limitations of RLxF
4 The Internal Tensions and Ethical Issues in RLxF
5 Rebooting Safety and Alignment: Integrating AI Ethics and System Safety
6 Conclusion
References - SecGenAI: Enhancing Security of Cloud-based Generative AI Applications within Australian Critical Technologies of National Interest / 2407.01110 / ISBN:https://doi.org/10.48550/arXiv.2407.01110 / Published by ArXiv / on (web) Publishing site
- II. UNDERSTANDING GENAI SECURITY
- Artificial intelligence, rationalization, and the limits of control in the public sector: the case of tax policy optimization / 2407.05336 / ISBN:https://doi.org/10.48550/arXiv.2407.05336 / Published by ArXiv / on (web) Publishing site
- Abstract
1. Introduction
5. Freedom, equality, and self-determination in the iron cage
6. Conclusion
References - Challenges and Best Practices in Corporate AI Governance:Lessons from the Biopharmaceutical Industry / 2407.05339 / ISBN:https://doi.org/10.48550/arXiv.2407.05339 / Published by ArXiv / on (web) Publishing site
- 5 Concluding remarks | Upfront investments vs. long-term benefits
- Operationalising AI governance through ethics-based auditing: An industry case study / 2407.06232 / Published by ArXiv / on (web) Publishing site
- 1. Introduction
3. AstraZeneca and AI governance
4. An ‘ethics-based’ AI audit
6. Lessons learned from AstraZeneca’s 2021 AI audit
REFERENCES - Auditing of AI: Legal, Ethical and Technical Approaches / 2407.06235 / Published by ArXiv / on (web) Publishing site
- 3 The need to audit AI systems – a confluence of top-down and bottom-up pressures
4 Auditing of AI’s multidisciplinary foundations
5 In this topical collection - Why should we ever automate moral decision making? / 2407.07671 / ISBN:https://doi.org/10.48550/arXiv.2407.07671 / Published by ArXiv / on (web) Publishing site
- References
- FUTURE-AI: International consensus guideline for trustworthy and deployable artificial intelligence in healthcare / 2309.12325 / ISBN:https://doi.org/10.48550/arXiv.2309.12325 / Published by ArXiv / on (web) Publishing site
- Table 1
- Bridging the Global Divide in AI Regulation: A Proposal for a Contextual, Coherent, and Commensurable Framework / 2303.11196 / ISBN:https://doi.org/10.48550/arXiv.2303.11196 / Published by ArXiv / on (web) Publishing site
- II. Global Divide in AI Regulation: Horizontally. Context-Specific
III. Striking a Balance Between the Two Approaches
IV. Proposing an Alternative 3C Framework - Past, Present, and Future: A Survey of The Evolution of Affective Robotics For Well-being / 2407.02957 / ISBN:https://doi.org/10.48550/arXiv.2407.02957 / Published by ArXiv / on (web) Publishing site
- III. Method
- With Great Power Comes Great Responsibility: The Role of Software Engineers / 2407.08823 / ISBN:https://doi.org/10.48550/arXiv.2407.08823 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Background and Related Work - Have We Reached AGI? Comparing ChatGPT, Claude, and Gemini to Human Literacy and Education Benchmarks / 2407.09573 / ISBN:https://doi.org/10.48550/arXiv.2407.09573 / Published by ArXiv / on (web) Publishing site
- 2 Literature Review
3 Methodology - Generative AI for Health Technology Assessment: Opportunities, Challenges, and Policy Considerations / 2407.11054 / ISBN:https://doi.org/10.48550/arXiv.2407.11054 / Published by ArXiv / on (web) Publishing site
- Applications of generative AI to health economic modeling
- Thorns and Algorithms: Navigating Generative AI Challenges Inspired by Giraffes and Acacias / 2407.11360 / ISBN:https://doi.org/10.48550/arXiv.2407.11360 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
4 Generative AI and Humans: Risks and Mitigation
6 Discussion
7 Recommendations: Fixing Gen AI’s Value Alignment
8 Conclusion
References - Prioritizing High-Consequence Biological Capabilities in Evaluations of Artificial Intelligence Models / 2407.13059 / ISBN:https://doi.org/10.48550/arXiv.2407.13059 / Published by ArXiv / on (web) Publishing site
- Next Steps for AI Biosecurity Evaluations
- Report on the Conference on Ethical and Responsible Design in the National AI Institutes: A Summary of Challenges / 2407.13926 / ISBN:https://doi.org/10.48550/arXiv.2407.13926 / Published by ArXiv / on (web) Publishing site
- 1. Organizing the National AI Institutes for Ethical and Responsible Design
3. AI Institutes and Society - Assurance of AI Systems From a Dependability Perspective / 2407.13948 / ISBN:https://doi.org/10.48550/arXiv.2407.13948 / Published by ArXiv / on (web) Publishing site
- 4 Assurance for General-Purpose AI
References - Honest Computing: Achieving demonstrable data lineage and provenance for driving data and process-sensitive policies / 2407.14390 / ISBN:https://doi.org/10.48550/arXiv.2407.14390 / Published by ArXiv / on (web) Publishing site
- 1. Introduction
3. Honest Computing reference specifications - RogueGPT: dis-ethical tuning transforms ChatGPT4 into a Rogue AI in 158 Words / 2407.15009 / ISBN:https://doi.org/10.48550/arXiv.2407.15009 / Published by ArXiv / on (web) Publishing site
- V. Benchmarking with Chat GPT4 Default Interface
References - Nudging Using Autonomous Agents: Risks and Ethical Considerations / 2407.16362 / ISBN:https://doi.org/10.48550/arXiv.2407.16362 / Published by ArXiv / on (web) Publishing site
- 4 Ethical Considerations
5 Principles for the Nudge Lifecycle
6 Conclusion - Mapping the individual, social, and biospheric impacts of Foundation Models / 2407.17129 / ISBN:https://doi.org/10.48550/arXiv.2407.17129 / Published by ArXiv / on (web) Publishing site
- Impact Statement
References
A Appendix - Exploring the Role of Social Support when Integrating Generative AI into Small Business Workflows / 2407.21404 / ISBN:https://doi.org/10.48550/arXiv.2407.21404 / Published by ArXiv / on (web) Publishing site
- 4 Findings
- Integrating ESG and AI: A Comprehensive Responsible AI Assessment Framework / 2408.00965 / ISBN:https://doi.org/10.48550/arXiv.2408.00965 / Published by ArXiv / on (web) Publishing site
- 2 Background and Literature Review
4 ESG-AI framework
5 Discussion - AI for All: Identifying AI incidents Related to Diversity and Inclusion / 2408.01438 / ISBN:https://doi.org/10.48550/arXiv.2408.01438 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
- Surveys Considered Harmful? Reflecting on the Use of Surveys in AI Research, Development, and Governance / 2408.01458 / ISBN:https://doi.org/10.48550/arXiv.2408.01458 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Related Work
4 Large-Scale Surveys of AI in the Literature
5 Discussion
References
B Additional Materials for Pilot Survey - Improving Large Language Model (LLM) fidelity through context-aware grounding: A systematic approach to reliability and veracity / 2408.04023 / ISBN:https://doi.org/10.48550/arXiv.2408.04023 / Published by ArXiv / on (web) Publishing site
- Abstract
2. Related Work
3. Proposed framework
7. Conclusion and Future Directions - Criticizing Ethics According to Artificial Intelligence / 2408.04609 / ISBN:https://doi.org/10.48550/arXiv.2408.04609 / Published by ArXiv / on (web) Publishing site
- 1 Preliminary notes
5 Investigating fundamental normative issues - Between Copyright and Computer Science: The Law and Ethics of Generative AI / 2403.14653 / ISBN:https://doi.org/10.48550/arXiv.2403.14653 / Published by ArXiv / on (web) Publishing site
- I. The Why and How Behind LLMs
IV. The Path Ahead - Recent Advances in Generative AI and Large Language Models: Current Status, Challenges, and Perspectives / 2407.14962 / ISBN:https://doi.org/10.48550/arXiv.2407.14962 / Published by ArXiv / on (web) Publishing site
- II. Generative AI
- VersusDebias: Universal Zero-Shot Debiasing for Text-to-Image Models via SLM-Based Prompt Engineering and Generative Adversary / 2407.19524 / ISBN:https://doi.org/10.48550/arXiv.2407.19524 / Published by ArXiv / on (web) Publishing site
- 3 Method
- Speculations on Uncertainty and Humane Algorithms / 2408.06736 / ISBN:https://doi.org/10.48550/arXiv.2408.06736 / Published by ArXiv / on (web) Publishing site
- 2 The Numbers of the Future
References - Visualization Atlases: Explaining and Exploring Complex Topics through Data, Visualization, and Narration / 2408.07483 / ISBN:https://doi.org/10.48550/arXiv.2408.07483 / Published by ArXiv / on (web) Publishing site
- 3 Visualization Atlas Design Patterns
- Neuro-Symbolic AI for Military Applications / 2408.09224 / ISBN:https://doi.org/10.48550/arXiv.2408.09224 / Published by ArXiv / on (web) Publishing site
- References
- Conference Submission and Review Policies to Foster Responsible Computing Research / 2408.09678 / ISBN:https://doi.org/10.48550/arXiv.2408.09678 / Published by ArXiv / on (web) Publishing site
- Financial Conflicts of Interest
- Don't Kill the Baby: The Case for AI in Arbitration / 2408.11608 / ISBN:https://doi.org/10.48550/arXiv.2408.11608 / Published by ArXiv / on (web) Publishing site
- 1. What is AI
1. Resistance Against AI Does Not Offer Conclusive Reasons for Outright Rejection
3. Arbitration Should Allow Flexible, Contract-Based Experimentation in a Fast-Evolving Regulatory Landscape - CIPHER: Cybersecurity Intelligent Penetration-testing Helper for Ethical Researcher / 2408.11650 / ISBN:https://doi.org/10.48550/arXiv.2408.11650 / Published by ArXiv / on (web) Publishing site
- 2. Background and Related Works
4. Experiment Results - The Problems with Proxies: Making Data Work Visible through Requester Practices / 2408.11667 / ISBN:https://doi.org/10.48550/arXiv.2408.11667 / Published by ArXiv / on (web) Publishing site
- Introduction
Related Work
References - Promises and challenges of generative artificial intelligence for human learning / 2408.12143 / ISBN:https://doi.org/10.48550/arXiv.2408.12143 / Published by ArXiv / on (web) Publishing site
- 1 Main
Tables - Catalog of General Ethical Requirements for AI Certification / 2408.12289 / ISBN:https://doi.org/10.48550/arXiv.2408.12289 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Operationalizable minimum requirements
4 Compliance and implementation of the suggested assessments
5 Overall Ethical Requirements (O)
6 Fairness (F)
7 Privacy and Data Protection (P)
8 Safety and Robustness (SR)
12 Concluding Remarks - Dataset | Mindset = Explainable AI | Interpretable AI / 2408.12420 / ISBN:https://doi.org/10.48550/arXiv.2408.12420 / Published by ArXiv / on (web) Publishing site
- 1. Introduction
3. Database and Experimental Setup
References - Has Multimodal Learning Delivered Universal Intelligence in Healthcare? A Comprehensive Survey / 2408.12880 / ISBN:https://doi.org/10.48550/arXiv.2408.12880 / Published by ArXiv / on (web) Publishing site
- 3 Multimodal Medical Studies
7 Challenges and Future Directions - What Is Required for Empathic AI? It Depends, and Why That Matters for AI Developers and Users / 2408.15354 / ISBN:https://doi.org/10.48550/arXiv.2408.15354 / Published by ArXiv / on (web) Publishing site
- What Empathic Capabilities Do AIs Need?
- Trustworthy and Responsible AI for Human-Centric Autonomous Decision-Making Systems / 2408.15550 / ISBN:https://doi.org/10.48550/arXiv.2408.15550 / Published by ArXiv / on (web) Publishing site
- 3 Governance for Human-Centric Intelligence Systems
4 Biases
5 Trustworthy and Responsible AI in Human-centric Applications
6 Open Challenges - A Survey for Large Language Models in Biomedicine / 2409.00133 / ISBN:https://doi.org/10.48550/arXiv.2409.00133 / Published by ArXiv / on (web) Publishing site
- 3 LLMs in Zero-Shot Biomedical Applications
- Digital Homunculi: Reimagining Democracy Research with Generative Agents / 2409.00826 / ISBN:https://doi.org/10.48550/arXiv.2409.00826 / Published by ArXiv / on (web) Publishing site
- 4. Risks and Caveats
References - AI Governance in Higher Education: Case Studies of Guidance at Big Ten Universities / 2409.02017 / ISBN:https://doi.org/10.48550/arXiv.2409.02017 / Published by ArXiv / on (web) Publishing site
- Introduction
Background
Results
Discussion
References - Exploring AI Futures Through Fictional News Articles / 2409.06354 / ISBN:https://doi.org/10.48550/arXiv.2409.06354 / Published by ArXiv / on (web) Publishing site
- Reflections from two workshop participants
Discussion and conclusion - Trust and ethical considerations in a multi-modal, explainable AI-driven chatbot tutoring system: The case of collaboratively solving Rubik's Cube / 2402.01760 / ISBN:https://doi.org/10.48550/arXiv.2402.01760 / Published by ArXiv / on (web) Publishing site
- D. CausalRating: A Tool To Rate Sentiments Analysis Systems for Bias
- The Responsible Foundation Model Development Cheatsheet: A Review of Tools & Resources / 2406.16746 / ISBN:https://doi.org/10.48550/arXiv.2406.16746 / Published by ArXiv / on (web) Publishing site
- B Cheatsheet Samples
- The overlooked need for Ethics in Complexity Science: Why it matters / 2409.02002 / ISBN:https://doi.org/10.48550/arXiv.2409.02002 / Published by ArXiv / on (web) Publishing site
- Annexus
- On the Creativity of Large Language Models / 2304.00008 / ISBN:https://doi.org/10.48550/arXiv.2304.00008 / Published by ArXiv / on (web) Publishing site
- 3 Large Language Models and Boden’s Three Criteria
- Artificial intelligence to advance Earth observation: A review of models, recent trends, and pathways forward / 2305.08413 / ISBN:https://doi.org/10.48550/arXiv.2305.08413 / Published by ArXiv / on (web) Publishing site
- Abstract
Part I Modelling - Machine learning, computer vision and processing 1 Machine learning and computer vision for Earth observation
5 Physics-aware machine learning
Part III Communicating - Machine-user interaction, trustworthiness & ethics 6 User-centric Earth observation
7 Earth observation and society: the growing relevance of ethics - LLM generated responses to mitigate the impact of hate speech / 2311.16905 / ISBN:https://doi.org/10.48550/arXiv.2311.16905 / Published by ArXiv / on (web) Publishing site
- 7 Results
- Why business adoption of quantum and AI technology must be ethical / 2312.10081 / ISBN:https://doi.org/10.48550/arXiv.2312.10081 / Published by ArXiv / on (web) Publishing site
- Argument by regulatory relevance
- Views on AI aren't binary -- they're plural / 2312.14230 / ISBN:https://doi.org/10.48550/arXiv.2312.14230 / Published by ArXiv / on (web) Publishing site
- The false binary: A note on language
The false binary: Ethics’s discontents with Alignment
The complex reality: Where Ethics and Alignment (actually) differ
References - Data-Centric Foundation Models in Computational Healthcare: A Survey / 2401.02458 / ISBN:https://doi.org/10.48550/arXiv.2401.02458 / Published by ArXiv / on (web) Publishing site
- Abstract
7 Data Privacy
8 Performance Evaluation
References - Recent Advances in Hate Speech Moderation: Multimodality and the Role of Large Models / 2401.16727 / ISBN:https://doi.org/10.48550/arXiv.2401.16727 / Published by ArXiv / on (web) Publishing site
- 4 Challenges
- Large language models as linguistic simulators and cognitive models in human research / 2402.04470 / ISBN:https://doi.org/10.48550/arXiv.2402.04470 / Published by ArXiv / on (web) Publishing site
- Six fallacies that misinterpret language models
- Navigating LLM Ethics: Advancements, Challenges, and Future Directions / 2406.18841 / ISBN:https://doi.org/10.48550/arXiv.2406.18841 / Published by ArXiv / on (web) Publishing site
- I. Introduction
IV. Findings and Resultant Themes
V. Discussion - How Mature is Requirements Engineering for AI-based Systems? A Systematic Mapping Study on Practices, Challenges, and Future Research Directions / 2409.07192 / ISBN:https://doi.org/10.48550/arXiv.2409.07192 / Published by ArXiv / on (web) Publishing site
- 3 Research Design
4 Results
5 Open Challenges and Future Research Directions (RQ5) - Synthetic Human Memories: AI-Edited Images and Videos Can Implant False Memories and Distort Recollection / 2409.08895 / ISBN:https://doi.org/10.48550/arXiv.2409.08895 / Published by ArXiv / on (web) Publishing site
- 2 Methodology
3 Result of Primary Analysis - Improving governance outcomes through AI documentation: Bridging theory and practice / 2409.08960 / ISBN:https://doi.org/10.48550/arXiv.2409.08960 / Published by ArXiv / on (web) Publishing site
- 4 Results
References - ValueCompass: A Framework of Fundamental Values for Human-AI Alignment / 2409.09586 / ISBN:https://doi.org/10.48550/arXiv.2409.09586 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Related Work
3 Designing ValueCompass: A Comprehensive Framework for Defining Fundamental Values in Alignment
4 Operationalizing ValueCompass: Methods to Measure Value Alignment of Humans and AI
5 Findings with ValueCompass: The Status Quo of Human-AI Value Alignment
6 Discussion
7 Conclusion
References - Beyond Algorithmic Fairness: A Guide to Develop and Deploy Ethical AI-Enabled Decision-Support Tools / 2409.11489 / ISBN:https://doi.org/10.48550/arXiv.2409.11489 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Ethical Considerations in AI-Enabled Optimization
5 Conclusion
Appendix B Technical and Conceptual Details for the Power Systems Case Study - Reporting Non-Consensual Intimate Media: An Audit Study of Deepfakes / 2409.12138 / ISBN:https://doi.org/10.48550/arXiv.2409.12138 / Published by ArXiv / on (web) Publishing site
- 2 Related Research
5 Discussion - Generative AI Carries Non-Democratic Biases and Stereotypes: Representation of Women, Black Individuals, Age Groups, and People with Disability in AI-Generated Images across Occupations / 2409.13869 / ISBN:https://doi.org/10.48550/arXiv.2409.13869 / Published by ArXiv / on (web) Publishing site
- Mutual Impacts: Technology and Democracy
- Artificial Human Intelligence: The role of Humans in the Development of Next Generation AI / 2409.16001 / ISBN:https://doi.org/10.48550/arXiv.2409.16001 / Published by ArXiv / on (web) Publishing site
- 5 Challenges and Perspectives in Human-Level AI Development
6 Final Thoughts and Discussions
7 Conclusion - Ethical and Scalable Automation: A Governance and Compliance Framework for Business Applications / 2409.16872 / ISBN:https://doi.org/10.48550/arXiv.2409.16872 / Published by ArXiv / on (web) Publishing site
- 3. Methodology
- Safety challenges of AI in medicine / 2409.18968 / ISBN:https://doi.org/10.48550/arXiv.2409.18968 / Published by ArXiv / on (web) Publishing site
- Abstract
2 Inherent problems of AI related to medicine - The Gradient of Health Data Privacy / 2410.00897 / ISBN:https://doi.org/10.48550/arXiv.2410.00897 / Published by ArXiv / on (web) Publishing site
- 3 The Health Data Privacy Gradient
- Ethical software requirements from user reviews: A systematic literature review / 2410.01833 / ISBN:https://doi.org/10.48550/arXiv.2410.01833 / Published by ArXiv / on (web) Publishing site
- I. Introduction
II. Background
IV. Results
References
APPENDIX A SELECTED STUDIES - Clinnova Federated Learning Proof of Concept: Key Takeaways from a Cross-border Collaboration / 2410.02443 / ISBN:https://doi.org/10.48550/arXiv.2410.02443 / Published by ArXiv / on (web) Publishing site
- IV. Proof of Concept I
VII. Evaluations and Experiments - DailyDilemmas: Revealing Value Preferences of LLMs with Quandaries of Daily Life / 2410.02683 / ISBN:https://doi.org/10.48550/arXiv.2410.02683 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Value-Based Framework on Moral Dilemmas
4 Daily Dilemmas: Dataset Analysis
5 Model Preference and Steerability on Daily Dilemmas
6 Conclusion
References - Application of AI in Credit Risk Scoring for Small Business Loans: A case study on how AI-based random forest model improves a Delphi model outcome in the case of Azerbaijani SMEs / 2410.05330 / ISBN:https://doi.org/10.48550/arXiv.2410.05330 / Published by ArXiv / on (web) Publishing site
- Ethical considerations
- AI-Press: A Multi-Agent News Generating and Feedback Simulation System Powered by Large Language Models / 2410.07561 / ISBN:https://doi.org/10.48550/arXiv.2410.07561 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
G Evaluation Experiment - DailyDilemmas: Revealing Value Preferences of LLMs with Quandaries of Daily Life / 2410.02683 / ISBN:https://doi.org/10.48550/arXiv.2410.02683 / Published by ArXiv / on (web) Publishing site
- Appendices
- From human-centered to social-centered artificial intelligence: Assessing ChatGPT's impact through disruptive events / 2306.00227 / ISBN:https://doi.org/10.48550/arXiv.2306.00227 / Published by ArXiv / on (web) Publishing site
- The multiple levels of AI impact
The emerging social impacts of ChatGPT - The Design Space of in-IDE Human-AI Experience / 2410.08676 / ISBN:https://doi.org/10.48550/arXiv.2410.08676 / Published by ArXiv / on (web) Publishing site
- III. Method
- Trust or Bust: Ensuring Trustworthiness in Autonomous Weapon Systems / 2410.10284 / ISBN:https://doi.org/10.48550/arXiv.2410.10284 / Published by ArXiv / on (web) Publishing site
- II. Related Work
V. Opportunities of AWS - Learning Human-like Representations to Enable Learning Human Values / 2312.14106 / ISBN:https://doi.org/10.48550/arXiv.2312.14106 / Published by ArXiv / on (web) Publishing site
- A. Appendix
- Study on the Helpfulness of Explainable Artificial Intelligence / 2410.11896 / ISBN:https://doi.org/10.48550/arXiv.2410.11896 / Published by ArXiv / on (web) Publishing site
- 4 Survey Results
- Navigating the Cultural Kaleidoscope: A Hitchhiker's Guide to Sensitivity in Large Language Models / 2410.12880 / ISBN:https://doi.org/10.48550/arXiv.2410.12880 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Previous studies
3 Overview of cultural safety
4 Cultural safety dataset
6 Main results on evaluation set
7 Cultural safeguarding
8 Results after cultural safeguarding
References - Is ETHICS about ethics? Evaluating the ETHICS benchmark / 2410.13009 / ISBN:https://doi.org/10.48550/arXiv.2410.13009 / Published by ArXiv / on (web) Publishing site
- 4 Poor quality of prompts and labels
References - How Do AI Companies Fine-Tune Policy? Examining Regulatory Capture in AI Governance / 2410.13042 / ISBN:https://doi.org/10.48550/arXiv.2410.13042 / Published by ArXiv / on (web) Publishing site
- References
- Data Defenses Against Large Language Models / 2410.13138 / ISBN:https://doi.org/10.48550/arXiv.2410.13138 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Ethics of Resisting LLM Inference - Do LLMs Have Political Correctness? Analyzing Ethical Biases and Jailbreak Vulnerabilities in AI Systems / 2410.13334 / ISBN:https://doi.org/10.48550/arXiv.2410.13334 / Published by ArXiv / on (web) Publishing site
- 2 Background and Related Works
3 Methodology: PCJAILBREAK - A Simulation System Towards Solving Societal-Scale Manipulation / 2410.13915 / ISBN:https://doi.org/10.48550/arXiv.2410.13915 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Related Work
4 Analysis
References
Appendices - Confrontation or Acceptance: Understanding Novice Visual Artists' Perception towards AI-assisted Art Creation / 2410.14925 / ISBN:https://doi.org/10.48550/arXiv.2410.14925 / Published by ArXiv / on (web) Publishing site
- 7 RQ3: The Stakeholder's Opinions Towards AI Tools
- Jailbreaking and Mitigation of Vulnerabilities in Large Language Models / 2410.15236 / ISBN:https://doi.org/10.48550/arXiv.2410.15236 / Published by ArXiv / on (web) Publishing site
- II. Background and Concepts
V. Evaluation and Benchmarking
VII. Conclusion - Ethical AI in Retail: Consumer Privacy and Fairness / 2410.15369 / ISBN:https://doi.org/10.48550/arXiv.2410.15369 / Published by ArXiv / on (web) Publishing site
- 2.0 Literature Review
- Vernacularizing Taxonomies of Harm is Essential for Operationalizing Holistic AI Safety / 2410.16562 / ISBN:https://doi.org/10.48550/arXiv.2410.16562 / Published by ArXiv / on (web) Publishing site
- Abstract
Introduction
Understanding Social Spaces: An Anthropological Ethics Approach
Taxonomies of Harm Must be Vernacularized to be Operationalized
Overgeneral Taxonomies Can Compound Potential Harms
Vernacularization as a General AI Safety Operationalization Methodology
References - Trustworthy XAI and Application / 2410.17139 / ISBN:https://doi.org/10.48550/arXiv.2410.17139 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
4 Future of Trustworthy (XAI) - Ethical Leadership in the Age of AI Challenges, Opportunities and Framework for Ethical Leadership / 2410.18095 / ISBN:https://doi.org/10.48550/arXiv.2410.18095 / Published by ArXiv / on (web) Publishing site
- Introduction
Understanding Ethical Leadership
Ethical Challenges Presented by AI
Opportunities for Ethical Leadership in the age of AI
Framework for Ethical Leadership
Case Studies of Ethical Leadership in AI
Recommendations for Leaders
Conclusion
References - TRIAGE: Ethical Benchmarking of AI Models Through Mass Casualty Simulations / 2410.18991 / ISBN:https://doi.org/10.48550/arXiv.2410.18991 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Methods
4 Discussion
References
Appendices - My Replika Cheated on Me and She Liked It: A Taxonomy of Algorithmic Harms in Human-AI Relationships / 2410.20130 / ISBN:https://doi.org/10.48550/arXiv.2410.20130 / Published by ArXiv / on (web) Publishing site
- 4 Results
5 Discussion - The Trap of Presumed Equivalence: Artificial General Intelligence Should Not Be Assessed on the Scale of Human Intelligence / 2410.21296 / ISBN:https://doi.org/10.48550/arXiv.2410.21296 / Published by ArXiv / on (web) Publishing site
- Abstract
2 Related Work
6 Conclusions
References - Standardization Trends on Safety and Trustworthiness Technology for Advanced AI / 2410.22151 / ISBN:https://doi.org/10.48550/arXiv.2410.22151 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
3 Trends in advanced AI safety and trustworthiness standardization - Democratizing Reward Design for Personal and Representative Value-Alignment / 2410.22203 / ISBN:https://doi.org/10.48550/arXiv.2410.22203 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Background and Related Work
3 Interactive-Reflective Dialogue Alignment (IRDA) System
4 Study Design & Methodology
5 Results: Study 1 - Multi-Agent Apple Farming
7 Discussion
8 Conclusion
References
Appendices - Ethical Statistical Practice and Ethical AI / 2410.22475 / ISBN:https://doi.org/10.48550/arXiv.2410.22475 / Published by ArXiv / on (web) Publishing site
- 1. Introduction
- Moral Agency in Silico: Exploring Free Will in Large Language Models / 2410.23310 / ISBN:https://doi.org/10.48550/arXiv.2410.23310 / Published by ArXiv / on (web) Publishing site
- Introduction
Defining Key Concepts
Theoretical Framework
Methodology
Discussion
Conclusion
References - Where Assessment Validation and Responsible AI Meet / 2411.02577 / ISBN:https://doi.org/10.48550/arXiv.2411.02577 / Published by ArXiv / on (web) Publishing site
- Abstract
Introduction
The Evolution of Responsible AI for Assessment - Examining Human-AI Collaboration for Co-Writing Constructive Comments Online / 2411.03295 / ISBN:https://doi.org/10.48550/arXiv.2411.03295 / Published by ArXiv / on (web) Publishing site
- 2 Related Work
4 Findings
References
A Appendix - I Always Felt that Something Was Wrong.: Understanding Compliance Risks and Mitigation Strategies when Professionals Use Large Language Models / 2411.04576 / ISBN:https://doi.org/10.48550/arXiv.2411.04576 / Published by ArXiv / on (web) Publishing site
- 4 Findings
5 Discussion
Appendices - Navigating the Cultural Kaleidoscope: A Hitchhiker's Guide to Sensitivity in Large Language Models / 2410.12880 / ISBN:https://doi.org/10.48550/arXiv.2410.12880 / Published by ArXiv / on (web) Publishing site
- Appendices