Tag: mode
Bibliography items where it occurs: 327
- The AI Index 2022 Annual Report / 2205.03468 / ISBN:https://doi.org/10.48550/arXiv.2205.03468 / Published by ArXiv / on (web) Publishing site
- Report highlights
Chapter 1 Research and Development
Chapter 2 Technical Performance
Chapter 3 Technical AI Ethics
Chapter 4 The Economy and Education
Chapter 5 AI Policy and Governance
Appendix - Exciting, Useful, Worrying, Futuristic: Public Perception of Artificial Intelligence in 8 Countries / 2001.00081 / ISBN:https://doi.org/10.48550/arXiv.2001.00081 / Published by ArXiv / on (web) Publishing site
- 3 Methodology
4 Findings
References
A Questionnaire - Selected Questions - Ethics of AI: A Systematic Literature Review of Principles and Challenges / 2109.07906 / ISBN:https://doi.org/10.48550/arXiv.2109.07906 / Published by ArXiv / on (web) Publishing site
- Abstract
2 Background
5 Detail results and analysis
7 Conclusions and future directions
References - AI Ethics Issues in Real World: Evidence from AI Incident Database / 2206.07635 / ISBN:https://doi.org/10.48550/arXiv.2206.07635 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Related Work
4 Results
5 Discussion
6 Conclusion
References - The Different Faces of AI Ethics Across the World: A Principle-Implementation Gap Analysis / 2206.03225 / ISBN:https://doi.org/10.48550/arXiv.2206.03225 / Published by ArXiv / on (web) Publishing site
- 5 Evaluation of Ethical Principle Implementations
6 Gap Mitigation
References - A Framework for Ethical AI at the United Nations / 2104.12547 / ISBN:https://doi.org/10.48550/arXiv.2104.12547 / Published by ArXiv / on (web) Publishing site
- 1. Problems with AI
2. Defining ethical AI - Worldwide AI Ethics: a review of 200 guidelines and recommendations for AI governance / 2206.11922 / ISBN:https://doi.org/10.48550/arXiv.2206.11922 / Published by ArXiv / on (web) Publishing site
- 2 Related Work
3 Methodology
4 Results
5 Discussion - ESR: Ethics and Society Review of Artificial Intelligence Research / 2106.11521 / ISBN:https://doi.org/10.48550/arXiv.2106.11521 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Related Work
4 Deployment and Evaluation
References - On the Current and Emerging Challenges of Developing Fair and Ethical AI Solutions in Financial Services / 2111.01306 / ISBN:https://doi.org/10.48550/arXiv.2111.01306 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 The Need for Ethical AI in Finance
3 Practical Challenges of Ethical AI
4 Conclusions & Outlook
References - A primer on AI ethics via arXiv - focus 2020-2023 / Kaggle / Published by Kaggle / on (web) Publishing site
- Section 2: History and prospective
- What does it mean to be a responsible AI practitioner: An ontology of roles and skills / 2205.03946 / ISBN:https://doi.org/10.48550/arXiv.2205.03946 / Published by ArXiv / on (web) Publishing site
- 4 Proposed competency framework for responsible AI practitioners
5 Discussion
References
Appendix A supplementary material - GPT detectors are biased against non-native English writers / 2304.02819 / ISBN:https://doi.org/10.48550/arXiv.2304.02819 / Published by ArXiv / on (web) Publishing site
- Abstract
Introduction
Discussion
References
Materials and Methods - Implementing Responsible AI: Tensions and Trade-Offs Between Ethics Aspects / 2304.08275 / ISBN:https://doi.org/10.48550/arXiv.2304.08275 / Published by ArXiv / on (web) Publishing site
- II. Underlying Aspects
III. Interactions between Aspects
References - QB4AIRA: A Question Bank for AI Risk Assessment / 2305.09300 / ISBN:https://doi.org/10.48550/arXiv.2305.09300 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 The Question Bank: QB4AIRA
3 Evaluation - A multilevel framework for AI governance / 2307.03198 / ISBN:https://doi.org/10.48550/arXiv.2307.03198 / Published by ArXiv / on (web) Publishing site
- Abstract
1. Introduction
6. Psychology of Trust
7. Propensity to Trust
8. Ethics and Trust Lenses in the Multilevel Framework
References - From OECD to India: Exploring cross-cultural differences in perceived trust, responsibility and reliance of AI and human experts / 2307.15452 / ISBN:https://doi.org/10.48550/arXiv.2307.15452 / Published by ArXiv / on (web) Publishing site
- 3. Results
References - The Ethics of AI Value Chains: An Approach for Integrating and Expanding AI Ethics Research, Practice, and Governance / 2307.16787 / ISBN:https://doi.org/10.48550/arXiv.2307.16787 / Published by ArXiv / on (web) Publishing site
- 2. Theory
3. Methodology
4. Ethical Implications of AI Value Chains
5. Future Directions for Research, Practice, & Policy - Perceptions of the Fourth Industrial Revolution and Artificial Intelligence Impact on Society / 2308.02030 / ISBN:https://doi.org/10.48550/arXiv.2308.02030 / Published by ArXiv / on (web) Publishing site
- References
- Regulating AI manipulation: Applying Insights from behavioral economics and psychology to enhance the practicality of the EU AI Act / 2308.02041 / ISBN:https://doi.org/10.48550/arXiv.2308.02041 / Published by ArXiv / on (web) Publishing site
- 2 Clarifying Terminologies of Article-5: Insights from Behavioral Economics and Psychology
3 Enhancing Protection for the General Public and Vulnerable Groups
References - From Military to Healthcare: Adopting and Expanding Ethical Principles for Generative Artificial Intelligence / 2308.02448 / ISBN:https://doi.org/10.48550/arXiv.2308.02448 / Published by ArXiv / on (web) Publishing site
- Abstract
Introduction
What is Generative Artificial Intelligence?
Applications in Military Versus Healthcare
GREAT PLEA Ethical Principles for Generative AI in Healthcare
References - Ethical Considerations and Policy Implications for Large Language Models: Guiding Responsible Development and Deployment / 2308.02678 / ISBN:https://doi.org/10.48550/arXiv.2308.02678 / Published by ArXiv / on (web) Publishing site
- Abstract
Introduction
System-role
Perturbation
Image-related
Generation-related
Bias and Discrimination of Training Data
References - Dual Governance: The intersection of centralized regulation and crowdsourced safety mechanisms for Generative AI / 2308.04448 / ISBN:https://doi.org/10.48550/arXiv.2308.04448 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Background
3 Policy scope
4 Centralized regulation in the US context
5 Crowdsourced safety mechanism
7 Limitations - Normative Ethics Principles for Responsible AI Systems: Taxonomy and Future Directions / 2208.12616 / ISBN:https://doi.org/10.48550/arXiv.2208.12616 / Published by ArXiv / on (web) Publishing site
- 3 Taxonomy of ethical principles
4 Previous operationalisation of ethical principles
References
A Methodology - Bad, mad, and cooked: Moral responsibility for civilian harms in human-AI military teams / 2211.06326 / ISBN:https://doi.org/10.48550/arXiv.2211.06326 / Published by ArXiv / on (web) Publishing site
- Introduction
Responsibility in War
Computers, Autonomy and Accountability
Moral Injury
Human Factors
References - The Future of ChatGPT-enabled Labor Market: A Preliminary Study / 2304.09823 / ISBN:https://doi.org/10.48550/arXiv.2304.09823 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Results
References - A Survey of Safety and Trustworthiness of Large Language Models through the Lens of Verification and Validation / 2305.11391 / ISBN:https://doi.org/10.48550/arXiv.2305.11391 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Large Language Models
3 Vulnerabilities, Attack, and Limitations
4 General Verification Framework
5 Falsification and Evaluation
6 Verification
7 Runtime Monitor
8 Regulations and Ethical Use
9 Discussions
10 Conclusions
Reference - Getting pwn'd by AI: Penetration Testing with Large Language Models / 2308.00121 / ISBN:https://doi.org/10.48550/arXiv.2308.00121 / Published by ArXiv / on (web) Publishing site
- Abstract
Keywords
1 Introduction
2 Background
3 LLM-based penetration testing
4 Discussion
5 A vision of AI-augmented pen-testing
6 Final ethical considerations
References - Artificial Intelligence across Europe: A Study on Awareness, Attitude and Trust / 2308.09979 / ISBN:https://doi.org/10.48550/arXiv.2308.09979 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Results
3 Discussion
References - Targeted Data Augmentation for bias mitigation / 2308.11386 / ISBN:https://doi.org/10.48550/arXiv.2308.11386 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Related works
3 Targeted data augmentation
4 Experiments
5 Conclusions
References - Exploring the Power of Creative AI Tools and Game-Based Methodologies for Interactive Web-Based Programming / 2308.11649 / ISBN:https://doi.org/10.48550/arXiv.2308.11649 / Published by ArXiv / on (web) Publishing site
- 2 Advancements in AI and Web-Based Programming
12 The Future Landscape: Creative AI Tools and Game-Based Methodologies in Education
13 Case Study Example: Learning Success with Creative AI and Game-Based Techniques
14 Conclusion & Discussion
References - Collect, Measure, Repeat: Reliability Factors for Responsible AI Data Collection / 2308.12885 / ISBN:https://doi.org/10.48550/arXiv.2308.12885 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Related Work on Data Excellence
5 Results
References - Building Trust in Conversational AI: A Comprehensive Review and Solution Architecture for Explainable, Privacy-Aware Systems using LLMs and Knowledge Graph / 2308.13534 / ISBN:https://doi.org/10.48550/arXiv.2308.13534 / Published by ArXiv / on (web) Publishing site
- Abstract
I. Introduction
II. Methods and training process of LLMs
III. Comprehensive review of state-of-the-art LLMs
IV. Applied and technology implications for LLMs
V. Market analysis of LLMs and cross-industry use cases
VI. Solution architecture for privacy-aware and trustworthy conversational AI
VII. Discussions
VIII. Conclusion
References
Appendix A industry-wide LLM use cases - The Promise and Peril of Artificial Intelligence -- Violet Teaming Offers a Balanced Path Forward / 2308.14253 / ISBN:https://doi.org/10.48550/arXiv.2308.14253 / Published by ArXiv / on (web) Publishing site
- 2 The evolution of artificial intelligence: from theory to general capabilities
3 Emerging dual-use risks and vulnerabilities in AI systems
4 Integrating red teaming, blue teaming, and ethics with violet teaming
5 Research directions in AI safety and violet teaming
6 A pathway for balanced AI innovation
7 Violet teaming to address dual-use risks of AI in biotechnology
10 Supplemental & additional details
References - Artificial Intelligence in Career Counseling: A Test Case with ResumAI / 2308.14301 / ISBN:https://doi.org/10.48550/arXiv.2308.14301 / Published by ArXiv / on (web) Publishing site
- 2 Literature review
4 Results and discussion - Rethinking Machine Ethics -- Can LLMs Perform Moral Reasoning through the Lens of Moral Theories? / 2308.15399 / ISBN:https://doi.org/10.48550/arXiv.2308.15399 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Related Works
3 Theory and Method
4 Experiment
5 Conclusion
Limitations
References - The AI Revolution: Opportunities and Challenges for the Finance Sector / 2308.16538 / ISBN:https://doi.org/10.48550/arXiv.2308.16538 / Published by ArXiv / on (web) Publishing site
- Table of contents and index
Executive summary
1 Introduction
2 Key AI technology in financial services
3 Benefits of AI use in the finance sector
4 Threats & potential pitfalls
5 Challenges
6 Regulation of AI and regulating through AI
7 Recommendations
References - Ethical Framework for Harnessing the Power of AI in Healthcare and Beyond / 2309.00064 / ISBN:https://doi.org/10.48550/arXiv.2309.00064 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Black box and lack of transparency
3 Bias and fairness
4 Human-centric AI
5 Ethical concerns and value alignment
References - The Impact of Artificial Intelligence on the Evolution of Digital Education: A Comparative Study of OpenAI Text Generation Tools including ChatGPT, Bing Chat, Bard, and Ernie / 2309.02029 / ISBN:https://doi.org/10.48550/arXiv.2309.02029 / Published by ArXiv / on (web) Publishing site
- Abstract
1. Introduction
2. Related work
3. ChatGPT Training Process
4. Methods
5. Discussion
6. Conclusion
References - Pathway to Future Symbiotic Creativity / 2209.02388 / ISBN:https://doi.org/10.48550/arXiv.2209.02388 / Published by ArXiv / on (web) Publishing site
- Contents
Introduction
Part 1 - 1 Generative Systems: Mimicking Artifacts
Part 1 - 2 Appreciate Systems: Mimicking Styles
Part 1 - 3 Artistic Systems: Mimicking Inspiration
Part 2 Art Data and Human–Machine Interaction in Art Creation
Part 2 - 1 Biometric Signal Sensing Technologies and Emotion Data
Part 2 - 2 Motion Capture Technologies and Motion Data
Part 2 - 3 Photogrammetry / Volumetric Capture
Part 2 - 4 Aesthetic Descriptor: Labelling Artefacts with Emotion
Part 3 Towards a Machine Artist Model
Part 3 - 1 Challenges in Endowing Machines with Creative Abilities
Part 3 - 2 Machine Artist Models
Part 3 - 3 Comparison with Generative Models
Part 3 - 4 Demonstration of the Proposed Framework
Part 4 NFTs and the Future Art Economy
Part 5 Ethical AI and Machine Artist
Part 5 - 1 Authorship and Ownership of AI-generated Works of Art
Part 5 - 2 Algorithmic Bias in Art Generation
Part 5 - 3 Democratization of Art with new Technologies
References
Acknowledgment - FUTURE-AI: Guiding Principles and Consensus Recommendations for Trustworthy Artificial Intelligence in Medical Imaging / 2109.09658 / ISBN:https://doi.org/10.48550/arXiv.2109.09658 / Published by ArXiv / on (web) Publishing site
- 2. Fairness - For Equitable AI in Medical Imaging
3. Universality - For Standardised AI in Medical Imaging
4. Traceability - For Transparent and Dynamic AI in Medical Imaging
5. Usability - For Effective and Beneficial AI in Medical Imaging
6. Robustness - For Reliable AI in Medical Imaging
7. Explainability - For Enhanced Understanding of AI in Medical Imaging
9. Discussion and Conclusion
References - The Cambridge Law Corpus: A Corpus for Legal AI Research / 2309.12269 / ISBN:https://doi.org/10.48550/arXiv.2309.12269 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 The Cambridge Law Corpus
3 Legal and Ethical Considerations
4 Experiments
5 Conclusion
General References
C Case Outcome Task Description
E Topic Model Top Words
F Evaluation of GPT Models
Cambridge Law Corpus: Datasheet - EALM: Introducing Multidimensional Ethical Alignment in Conversational Information Retrieval / 2310.00970 / ISBN:https://doi.org/10.48550/arXiv.2310.00970 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Related Work
3 Dataset Construction
4 Modeling Ethics
5 Experiments
6 Conclusions
Appendix
References - Security Considerations in AI-Robotics: A Survey of Current Methods, Challenges, and Opportunities / 2310.08565 / ISBN:https://doi.org/10.48550/arXiv.2310.08565 / Published by ArXiv / on (web) Publishing site
- I. Introduction and Motivation
II. AI-Robotics Systems Architecture
III. Survey Approach & Taxonomy
IV. Attack Surfaces
VI. Human-Robot Interaction (HRI) Security Studies
VII. Future Research & Discussion
References - If our aim is to build morality into an artificial agent, how might we begin to go about doing so? / 2310.08295 / ISBN:https://doi.org/10.48550/arXiv.2310.08295 / Published by ArXiv / on (web) Publishing site
- 1 The Top-Down Approach Alone Might Be Insufficient
3 Proposing a Hybrid Approach
4 AI Governance Principles
References - Deepfakes, Phrenology, Surveillance, and More! A Taxonomy of AI Privacy Risks / 2310.07879 / ISBN:https://doi.org/10.48550/arXiv.2310.07879 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Background and Related Work
3 Method
4 Taxonomy of AI Privacy Risks
5 Discussion
6 Conclusion
References - ClausewitzGPT Framework: A New Frontier in Theoretical Large Language Model Enhanced Information Operations / 2310.07099 / ISBN:https://doi.org/10.48550/arXiv.2310.07099 / Published by ArXiv / on (web) Publishing site
- Abstract
Introduction
Nation-State Advances in AI-driven Information Operations
Theoretical Impact of LLMs on Information Operations
ClausewitzGPT and Modern Strategy
Mathematical Foundations
Ethical and Strategic Considerations: AI Mediators in the Age of LLMs
Integrating Computational Social Science, Computational Ethics, Systems Engineering, and AI Ethics in LLM-driven Operations
Looking Forward: ClausewitzGPT
References - The AI Incident Database as an Educational Tool to Raise Awareness of AI Harms: A Classroom Exploration of Efficacy, Limitations, & Future Improvements / 2310.06269 / ISBN:https://doi.org/10.48550/arXiv.2310.06269 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Research Design and Methodology
3 Analysis and Findings
References - A Review of the Ethics of Artificial Intelligence and its Applications in the United States / 2310.05751 / ISBN:https://doi.org/10.48550/arXiv.2310.05751 / Published by ArXiv / on (web) Publishing site
- 1. Introduction
2. Literature Review
4. Implementing the Practical Use of Ethical AI Applications - A Survey of Large Language Models for Healthcare: from Data, Technology, and Applications to Accountability and Ethics / 2310.05694 / ISBN:https://doi.org/10.48550/arXiv.2310.05694 / Published by ArXiv / on (web) Publishing site
- Abstract
I. INTRODUCTION
II. WHAT LLMS CAN DO FOR HEALTHCARE? FROM FUNDAMENTAL TASKS TO ADVANCED APPLICATIONS
III. FROM PLMS TO LLMS FOR HEALTHCARE
IV. TRAIN AND USE LLM FOR HEALTHCARE
V. EVALUATION METHOD
VI. IMPROVING FAIRNESS, ACCOUNTABILITY, TRANSPARENCY, AND ETHICS
VII. FUTURE WORK AND CONCLUSION
REFERENCES - STREAM: Social data and knowledge collective intelligence platform for TRaining Ethical AI Models / 2310.05563 / ISBN:https://doi.org/10.48550/arXiv.2310.05563 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 STREAM: Social data and knowledge collective intelligence platform for TRaining Ethical AI Models
3 The applications of STREAM
4 Conclusion and Future Work
References - Regulation and NLP (RegNLP): Taming Large Language Models / 2310.05553 / ISBN:https://doi.org/10.48550/arXiv.2310.05553 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
3 LLMs: Risk and Uncertainty
4 Scientific Expertise, Social Media and Regulatory Capture
Limitations
References - Ethics of Artificial Intelligence and Robotics in the Architecture, Engineering, and Construction Industry / 2310.05414 / ISBN:https://doi.org/10.48550/arXiv.2310.05414 / Published by ArXiv / on (web) Publishing site
- 1. Introduction
4. Systematic Review and Scientometric Analysis
5. Ethical Issues of AI and Robotics in AEC Industry
7. Future Research Direction
8. Conclusion
References - Commercialized Generative AI: A Critical Study of the Feasibility and Ethics of Generating Native Advertising Using Large Language Models in Conversational Web Search / 2310.04892 / ISBN:https://doi.org/10.48550/arXiv.2310.04892 / Published by ArXiv / on (web) Publishing site
- Abstract
Introduction
Background and Related Work
Pilot Study: Text SERPs with Ads
Evaluation of the Pilot Study
Ethics of Generating Native Ads
Conclusion
References - Compromise in Multilateral Negotiations and the Global Regulation of Artificial Intelligence / 2309.17158 / ISBN:https://doi.org/10.48550/arXiv.2309.17158 / Published by ArXiv / on (web) Publishing site
- 2. The practice of multilateral negotiation and the mechanisms of compromises
- Towards A Unified Utilitarian Ethics Framework for Healthcare Artificial Intelligence / 2309.14617 / ISBN:https://doi.org/10.48550/arXiv.2309.14617 / Published by ArXiv / on (web) Publishing site
- Introduction
Method
A Unified Utilitarian Ethics Framework
Theory and Practical Implications
References - Risk of AI in Healthcare: A Comprehensive Literature Review and Study Framework / 2309.14530 / ISBN:https://doi.org/10.48550/arXiv.2309.14530 / Published by ArXiv / on (web) Publishing site
- 3. Clinical Risks
4. Technical Risks
References - Autonomous Vehicles an overview on system, cyber security, risks, issues, and a way forward / 2309.14213 / ISBN:https://doi.org/10.48550/arXiv.2309.14213 / Published by ArXiv / on (web) Publishing site
- Table of Contents
1. Introduction
2. Autonomous vehicles
4. Traffic Flow prediction in Autonomous vehicles
5. Cybersecurity Risks
6. Risk management
9. References - The Return on Investment in AI Ethics: A Holistic Framework / 2309.13057 / ISBN:https://doi.org/10.48550/arXiv.2309.13057 / Published by ArXiv / on (web) Publishing site
- 4. A Holistic Framework
5. Discussion
6. References - An Evaluation of GPT-4 on the ETHICS Dataset / 2309.10492 / ISBN:https://doi.org/10.48550/arXiv.2309.10492 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Datasets and Methods
3 Results
4 Discussion
References - Who to Trust, How and Why: Untangling AI Ethics Principles, Trustworthiness and Trust / 2309.10318 / ISBN:https://doi.org/10.48550/arXiv.2309.10318 / Published by ArXiv / on (web) Publishing site
- Trust in AI
Trust and AI Ethics Principles
Trust in AI as Socio-Technical Systems
References - In Consideration of Indigenous Data Sovereignty: Data Mining as a Colonial Practice / 2309.10215 / ISBN:https://doi.org/10.48550/arXiv.2309.10215 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Definitions of Terms
4 Methodology
5 Relating Case Studies to Indigenous Data Sovereignty and CARE Principles
References - The Glamorisation of Unpaid Labour: AI and its Influencers / 2308.02399 / ISBN:https://doi.org/10.48550/arXiv.2308.02399 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Harms of Influencer Marketing
References - AI & Blockchain as sustainable teaching and learning tools to cope with the 4IR / 2305.01088 / ISBN:https://doi.org/10.48550/arXiv.2305.01088 / Published by ArXiv / on (web) Publishing site
- Abstract
2. AI and blockchain in education: An overview of the benefits and challenges
5. AI-powered assessment and evaluation
9. Challenges of AI and Blockchain in Teaching and Learning
11. References - Toward an Ethics of AI Belief / 2304.14577 / ISBN:https://doi.org/10.48550/arXiv.2304.14577 / Published by ArXiv / on (web) Publishing site
- 2. “Belief” in Humans and AI
3. Proposed Novel Topics in an Ethics of AI Belief
4. Nascent Extant Work that Falls Within the Ethics of AI Belief
References - Ensuring Trustworthy Medical Artificial Intelligence through Ethical and Philosophical Principles / 2304.11530 / ISBN:https://doi.org/10.48550/arXiv.2304.11530 / Published by ArXiv / on (web) Publishing site
- Introduction
Ethical datasets and algorithm development guidelines
Towards solving key ethical challenges in Medical AI
Ethical guidelines for medical AI model deployment - Responsible AI Pattern Catalogue: A Collection of Best Practices for AI Governance and Engineering / 2209.04963 / ISBN:https://doi.org/10.48550/arXiv.2209.04963 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
3 Governance Patterns
4 Process Patterns
5 Product Patterns
6 Related Work
References - The Ethics of AI Value Chains: An Approach for Integrating and Expanding AI Ethics Research, Practice, and Governance / 2307.16787 / ISBN:https://doi.org/10.48550/arXiv.2307.16787 / Published by ArXiv / on (web) Publishing site
- Bibliography
Appendix A: Integrated Inventory of Ethical Concerns, Value Chains Actors, Resourcing Activities, & Sampled Sources - FUTURE-AI: International consensus guideline for trustworthy and deployable artificial intelligence in healthcare / 2309.12325 / ISBN:https://doi.org/10.48550/arXiv.2309.12325 / Published by ArXiv / on (web) Publishing site
- METHODS
FUTURE-AI GUIDELINE
DISCUSSION - Language Agents for Detecting Implicit Stereotypes in Text-to-Image Models at Scale / 2310.11778 / ISBN:https://doi.org/10.48550/arXiv.2310.11778 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Agent Design
3 Agent Benchmark
4 Agent Performance
5 Related Work
6 Conclusion and Future Work
References
Appendix A Data Details
Appendix B Experiment Details - Specific versus General Principles for Constitutional AI / 2310.13798 / ISBN:https://doi.org/10.48550/arXiv.2310.13798 / Published by ArXiv / on (web) Publishing site
- Abstract
Contents
1 Introduction
2 AI feedback on specific problematic AI traits
3 Generalization from a Simple Good for Humanity Principle
4 Reinforcement Learning with Good-for-Humanity Preference Models
5 Related Work
6 Discussion
7 Contribution Statement
References
A Model Glossary
B Trait Preference Modeling
C General Prompts for GfH Preference Modeling
D Generalization to Other Traits
E Response Diversity and the Size of the Generating Model
G Over-Training on Good for Humanity
H Samples
I Responses on Prompts from PALMS, LaMDA, and InstructGPT - The Self 2.0: How AI-Enhanced Self-Clones Transform Self-Perception and Improve Presentation Skills / 2310.15112 / ISBN:https://doi.org/10.48550/arXiv.2310.15112 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Related Work
3 Method
4 Findings
5 Discussion
6 Conclusion
References - Systematic AI Approach for AGI: Addressing Alignment, Energy, and AGI Grand Challenges / 2310.15274 / ISBN:https://doi.org/10.48550/arXiv.2310.15274 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Trifecta of AI Challenges
3 Systematic AI Approach for AGI
4 Systematic AI for Energy Wall
5 System Design for AI Alignment
6 System Insights from the Brain
References - AI Alignment and Social Choice: Fundamental Limitations and Policy Implications / 2310.16048 / ISBN:https://doi.org/10.48550/arXiv.2310.16048 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Reinforcement Learning with Multiple Reinforcers
3 Arrow-Sen Impossibility Theorems for RLHF
4 Implications for AI Governance and Policy
5 Conclusion
References - A Comprehensive Review of AI-enabled Unmanned Aerial Vehicle: Trends, Vision, and Challenges / 2310.16360 / ISBN:https://doi.org/10.48550/arXiv.2310.16360 / Published by ArXiv / on (web) Publishing site
- III. UAV Platform Type
IV. Artificial Intelligence Embedded UAV
V. Challenges and Future Aspect on AI Enabled UAV
References - Unpacking the Ethical Value Alignment in Big Models / 2310.17551 / ISBN:https://doi.org/10.48550/arXiv.2310.17551 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Risks and Ethical Issues of Big Model
3 Investigating the Ethical Values of Large Language Models
4 Equilibrium Alignment: A Prospective Paradigm for Ethical Value Alignment
5 Conclusion
References - Moral Responsibility for AI Systems / 2310.18040 / ISBN:https://doi.org/10.48550/arXiv.2310.18040 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Causal Models
3 The BvH and HK Definitions
4 The Causal Condition
References
Appendix - AI for Open Science: A Multi-Agent Perspective for Ethically Translating Data to Knowledge / 2310.18852 / ISBN:https://doi.org/10.48550/arXiv.2310.18852 / Published by ArXiv / on (web) Publishing site
- 2 Background and Related Work
3 A Formal Language of AI for Open Science - Artificial Intelligence Ethics Education in Cybersecurity: Challenges and Opportunities: a focus group report / 2311.00903 / ISBN:https://doi.org/10.48550/arXiv.2311.00903 / Published by ArXiv / on (web) Publishing site
- Educational Challenges of Teaching AI Ethics in Cybersecurity and Core Ethical Principles
AI tool-specific educational concerns
Broader educational preparedness for work in AI Cybersecurity
References - Human participants in AI research: Ethics and transparency in practice / 2311.01254 / ISBN:https://doi.org/10.48550/arXiv.2311.01254 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Contextual Concerns: Why AI Research Needs its Own Guidelines
3 Ethical Principles for AI Research with Human Participants
4 Principles in Practice: Guidelines for AI Research with Human Participants
References
A Evaluating Current Practices for Human-Participants Research
B Placing Research Ethics for Human Participants in Historical Context
C Defining the Scope of Research Participation in AI Research - LLMs grasp morality in concept / 2311.02294 / ISBN:https://doi.org/10.48550/arXiv.2311.02294 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 A General Theory of Meaning
3 The Meaning Model
4 The Moral Model
5 Conclusion
A Supplementary Material
References - Educating for AI Cybersecurity Work and Research: Ethics, Systems Thinking, and Communication Requirements / 2311.04326 / ISBN:https://doi.org/10.48550/arXiv.2311.04326 / Published by ArXiv / on (web) Publishing site
- Research questions
- Towards Effective Paraphrasing for Information Disguise / 2311.05018 / ISBN:https://doi.org/10.1007/978-3-031-28238-6_22 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Related Work
3 Methodology
References - Kantian Deontology Meets AI Alignment: Towards Morally Grounded Fairness Metrics / 2311.05227 / ISBN:https://doi.org/10.48550/arXiv.2311.05227 / Published by ArXiv / on (web) Publishing site
- 3 Measuring Fairness Metrics
4 Deontological AI Alignment
5 Aligning with Deontological Principles: Use Cases
6 Conclusion - Unlocking the Potential of ChatGPT: A Comprehensive Exploration of its Applications, Advantages, Limitations, and Future Directions in Natural Language Processing / 2304.02017 / ISBN:https://doi.org/10.48550/arXiv.2304.02017 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Overview of ChatGPT and its capabilities
3 Transformers and pre-trained language models
4 Applications of ChatGPT in real-world scenarios
5 Advantages of ChatGPT in natural language processing
6 Limitations and potential challenges
7 Ethical considerations when using ChatGPT
8 Prompt engineering and generation
10 Future directions for ChatGPT in vision domain
References - Fairness And Bias in Artificial Intelligence: A Brief Survey of Sources, Impacts, And Mitigation Strategies / 2304.07683 / ISBN:https://doi.org/10.48550/arXiv.2304.07683 / Published by ArXiv / on (web) Publishing site
- Abstract
II. Sources of bias in AI
III. Impacts of bias in AI
IV. Mitigation strategies for bias in AI
V. Fairness in AI
VI. Mitigation strategies for fairness in AI
VII. Conclusions
References - Towards ethical multimodal systems / 2304.13765 / ISBN:https://doi.org/10.48550/arXiv.2304.13765 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Related Work
3 Crafting an Ethical Dataset
4 A Multimodal Ethics Classifier
Acknowledgments and Disclosure of Funding - A Brief History of Prompt: Leveraging Language Models. (Through Advanced Prompting) / 2310.04438 / ISBN:https://doi.org/10.48550/arXiv.2310.04438 / Published by ArXiv / on (web) Publishing site
- Abstract
II. Introduction
III. Prehistoric prompting: pre NN-era
IV. History of NLP between 2010 and 2015: the pre-attention mechanism era
VI. 2015: birth of the transformer
VII. The second wave in 2017: rise of RL
VIII. The third wave 2018: the rise of transformers
IX. 2019: THE YEAR OF CONTROL
X. 2020-2021: the rise of LLMS
XI. 2022-current: beyond language generation
XII. Conclusions
References - Synergizing Human-AI Agency: A Guide of 23 Heuristics for Service Co-Creation with LLM-Based Agents / 2310.15065 / ISBN:https://doi.org/10.48550/arXiv.2310.15065 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Related work
3 Method
4 Findings
5 Discussion
6 Conclusion
References - She had Cobalt Blue Eyes: Prompt Testing to Create Aligned and Sustainable Language Models / 2310.18333 / ISBN:https://doi.org/10.48550/arXiv.2310.18333 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Related Works
3 ReFLeCT: Robust, Fair, and Safe LLM Construction Test Suite
4 Empirical Evaluation and Outcomes
5 Conclusion
References - Safety, Trust, and Ethics Considerations for Human-AI Teaming in Aerospace Control / 2311.08943 / ISBN:https://doi.org/10.48550/arXiv.2311.08943 / Published by ArXiv / on (web) Publishing site
- I. Introduction
II. Humans In, On, and Out-of-the-Loop
III. Safety
IV. Trust
V. Ethics
References - How Trustworthy are Open-Source LLMs? An Assessment under Malicious Demonstrations Shows their Vulnerabilities / 2311.09447 / ISBN:https://doi.org/10.48550/arXiv.2311.09447 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Related Work
3 Methodology
4 Experiments
5 Conclusion
Limitations
Ethical Considerations
References - Prudent Silence or Foolish Babble? Examining Large Language Models' Responses to the Unknown / 2311.09731 / ISBN:https://doi.org/10.48550/arXiv.2311.09731 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 UnknownBench: Evaluating LLMs on the Unknown
3 Experiments
4 Related Work
5 Conclusion
References
A Limitations
B Confidence Elicitation Method Comparison
D Additional Results and Figures
E Prompt Templates - Revolutionizing Customer Interactions: Insights and Challenges in Deploying ChatGPT and Generative Chatbots for FAQs / 2311.09976 / ISBN:https://doi.org/10.48550/arXiv.2311.09976 / Published by ArXiv / on (web) Publishing site
- Abstract
1. Introduction
2. Chatbots Background and Scope of Research
3. Chatbot approaches overview: Taxonomy of existing methods
4. ChatGPT
5. Applications
6. Open challenges
7. Future Research Directions
8. Conclusion
References - Practical Cybersecurity Ethics: Mapping CyBOK to Ethical Concerns / 2311.10165 / ISBN:https://doi.org/10.48550/arXiv.2311.10165 / Published by ArXiv / on (web) Publishing site
- 5 Discussion
- First, Do No Harm: Algorithms, AI, and Digital Product Liability -- Managing Algorithmic Harms Through Liability Law and Market Incentives / 2311.10861 / ISBN:https://doi.org/10.48550/arXiv.2311.10861 / Published by ArXiv / on (web) Publishing site
- Executive Summary
Why Liability Law?
Harms, Risk, and Liability Practices
Conclusion
Appendix A - What is an Algorithmic Harm? And a Bibliography - Case Repositories: Towards Case-Based Reasoning for AI Alignment / 2311.10934 / ISBN:https://doi.org/10.48550/arXiv.2311.10934 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Proposed Process
3 Related Work and Discussion
References - Responsible AI Considerations in Text Summarization Research: A Review of Current Practices / 2311.11103 / ISBN:https://doi.org/10.48550/arXiv.2311.11103 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
3 Methods
4 Findings
5 Discussion and Recommendations
References
B Methodology - Assessing AI Impact Assessments: A Classroom Study / 2311.11193 / ISBN:https://doi.org/10.48550/arXiv.2311.11193 / Published by ArXiv / on (web) Publishing site
- 3 Study Design
4 Findings
5 Discussion
References
A Overview of AIIA Instruments
B Study Materials - GPT in Data Science: A Practical Exploration of Model Selection / 2311.11516 / ISBN:https://doi.org/10.48550/arXiv.2311.11516 / Published by ArXiv / on (web) Publishing site
- Abstract
I. Introduction
II. Background
III. Approach: capturing and representing heuristics behind GPT's decision-making process
IV. Comparative results
V. Conclusion and future work
VI. Future work
References - Responsible AI Research Needs Impact Statements Too / 2311.11776 / ISBN:https://doi.org/10.48550/arXiv.2311.11776 / Published by ArXiv / on (web) Publishing site
- Suggestions for More Meaningful Engagement with the Impact of RAI Research
Concluding Reflections - Large Language Models in Education: Vision and Opportunities / 2311.13160 / ISBN:https://doi.org/10.48550/arXiv.2311.13160 / Published by ArXiv / on (web) Publishing site
- Abstract
I. Introduction
II. Education and LLMS
III. Key technologies for EDULLMS
IV. LLM-empowered education
V. Key points in LLMSEDU
VI. Challenges and future directions
VII. Conclusion
References - The Rise of Creative Machines: Exploring the Impact of Generative AI / 2311.13262 / ISBN:https://doi.org/10.48550/arXiv.2311.13262 / Published by ArXiv / on (web) Publishing site
- I. Introduction
II. Extent and impact of generative AI
IV. Risks of generative AI
V. Additional thoughts - Towards Auditing Large Language Models: Improving Text-based Stereotype Detection / 2311.14126 / ISBN:https://doi.org/10.48550/arXiv.2311.14126 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Related Works
3 Methodology
4 Results and Discussion
5 Conclusion and Future Work
References
6 Appendix - Ethical Implications of ChatGPT in Higher Education: A Scoping Review / 2311.14378 / ISBN:https://doi.org/10.48550/arXiv.2311.14378 / Published by ArXiv / on (web) Publishing site
- Introduction
Research Method
Results
References - Potential Societal Biases of ChatGPT in Higher Education: A Scoping Review / 2311.14381 / ISBN:https://doi.org/10.48550/arXiv.2311.14381 / Published by ArXiv / on (web) Publishing site
- Abstract
INTRODUCTION
OVERVIEW OF SOCIETAL BIASES IN GAI MODELS
FINDINGS - RAISE -- Radiology AI Safety, an End-to-end lifecycle approach / 2311.14570 / ISBN:https://doi.org/10.48550/arXiv.2311.14570 / Published by ArXiv / on (web) Publishing site
- Abstract
1. Introduction
2. Pre-Deployment phase
3. Production deployment monitoring phase
4. Post-market surveillance phase
5. Conclusion
Bibliography - Ethics and Responsible AI Deployment / 2311.14705 / ISBN:https://doi.org/10.48550/arXiv.2311.14705 / Published by ArXiv / on (web) Publishing site
- 1. Introduction: The Role of Algorithms in Protecting Privacy
2. Case Study of the Bletchley Summit
3. Ethical considerations in AI decision-making
4. Addressing bias, transparency, and accountability
5. Ethical AI design principles and guidelines
9. Discussion on engaging stakeholders: fostering dialogue and collaboration between developers, users, and affected communities.
11. References - From deepfake to deep useful: risks and opportunities through a systematic literature review / 2311.15809 / ISBN:https://doi.org/10.48550/arXiv.2311.15809 / Published by ArXiv / on (web) Publishing site
- 2. Material and methods
4. Discussion - Generative AI and US Intellectual Property Law / 2311.16023 / ISBN:https://doi.org/10.48550/arXiv.2311.16023 / Published by ArXiv / on (web) Publishing site
- Abstract
I. Very slowly then all-at-once
II. US Patent law
IV. Caveat emptor: no free ride for automation
V. Potential harms and mitigation
VII. Future considerations
References - Survey on AI Ethics: A Socio-technical Perspective / 2311.17228 / ISBN:https://doi.org/10.48550/arXiv.2311.17228 / Published by ArXiv / on (web) Publishing site
- Abstract
2 Privacy and data protection
3 Transparency and explainability
4 Fairness and equity
5 Responsibility, accountability, and regulations
6 Environmental impact
7 Conclusion
References - Deepfakes, Misinformation, and Disinformation in the Era of Frontier AI, Generative AI, and Large AI Models / 2311.17394 / ISBN:https://doi.org/10.48550/arXiv.2311.17394 / Published by ArXiv / on (web) Publishing site
- Abstract
I. Introduction
II. Background
III. The rise of large AI models
IV. Societal implications
V. Technical defense mechanisms
VI. Cross-platform strategies
VIII. Proposed integrated defense framework
IX. Discussion
References - Navigating Privacy and Copyright Challenges Across the Data Lifecycle of Generative AI / 2311.18252 / ISBN:https://doi.org/10.48550/arXiv.2311.18252 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Legal Basis of Privacy and Copyright Concerns over Generative AI
3 Mapping Challenges throughout the Data Lifecycle
4 Lifecycle Approaches
References - From Lab to Field: Real-World Evaluation of an AI-Driven Smart Video Solution to Enhance Community Safety / 2312.02078 / ISBN:https://doi.org/10.48550/arXiv.2312.02078 / Published by ArXiv / on (web) Publishing site
- Related works
Community Engagement
References - Understanding Teacher Perspectives and Experiences after Deployment of AI Literacy Curriculum in Middle-school Classrooms / 2312.04839 / ISBN:https://doi.org/10.48550/arXiv.2312.04839 / Published by ArXiv / on (web) Publishing site
- 3 Results
4 Conclusions
References - Generative AI in Higher Education: Seeing ChatGPT Through Universities' Policies, Resources, and Guidelines / 2312.05235 / ISBN:https://doi.org/10.48550/arXiv.2312.05235 / Published by ArXiv / on (web) Publishing site
- 1. Introduction
5. Results - Contra generative AI detection in higher education assessments / 2312.05241 / ISBN:https://doi.org/10.48550/arXiv.2312.05241 / Published by ArXiv / on (web) Publishing site
- 1. Introduction
2. The pitfalls in detecting generative AI output
3. Detectors are not useful
4. Teach critical usage of AI
5. Conclusion
Acknowledgements
References - Intelligence Primer / 2008.07324 / ISBN:https://doi.org/10.48550/arXiv.2008.07324 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Human intelligence
4 Bias, prejudice, and individuality
5 System design of intelligence
6 Measuring intelligence
7 Mathematically modeling intelligence
8 Consciousness
9 Augmenting human intelligence
10 Exceeding human intelligence
11 Control of intelligence
12 Large language models and Generative AI
14 Wrong numbers
15 Final thoughts
References - RE-centric Recommendations for the Development of Trustworthy(er) Autonomous Systems / 2306.01774 / ISBN:https://doi.org/10.48550/arXiv.2306.01774 / Published by ArXiv / on (web) Publishing site
- 2 Related work
4 Results & analysis
References - Ethical Considerations Towards Protestware / 2306.10019 / ISBN:https://doi.org/10.48550/arXiv.2306.10019 / Published by ArXiv / on (web) Publishing site
- Abstract
I. Introduction - Control Risk for Potential Misuse of Artificial Intelligence in Science / 2312.06632 / ISBN:https://doi.org/10.48550/arXiv.2312.06632 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Risks of Misuse for Artificial Intelligence in Science
3 Control the Risks of AI Models in Science
4 Call for Responsible AI for Science
5 Discussion
6 Related Works
References
Appendix A Assessing the Risks of AI Misuse in Scientific Research
Appendix B Details of Risks Demonstration in Chemical Science
Appendix C Detailed Implementation of SciGuard
Appendix D Details of Benchmark Results - Disentangling Perceptions of Offensiveness: Cultural and Moral Correlates / 2312.06861 / ISBN:https://doi.org/10.48550/arXiv.2312.06861 / Published by ArXiv / on (web) Publishing site
- Abstract
...
Study 1: Geo-cultural Differences in Offensiveness
Study 2: Moral Foundations of Offensiveness
Study 3: Implications for Responsible AI
General Discussion
Moral Factors
References - The AI Assessment Scale (AIAS): A Framework for Ethical Integration of Generative AI in Educational Assessment / 2312.07086 / ISBN:https://doi.org/10.48550/arXiv.2312.07086 / Published by ArXiv / on (web) Publishing site
- Introduction
Literature
Problematizing The View Of GenAI Content As Academic Misconduct
The AI Assessment Scale
Conclusion
Conflict of Interest
References - Culturally Responsive Artificial Intelligence -- Problems, Challenges and Solutions / 2312.08467 / ISBN:https://doi.org/10.48550/arXiv.2312.08467 / Published by ArXiv / on (web) Publishing site
- Abstract
Introduction
The concept of multiculturalism and its importance
Artificial intelligence – concept and ethical background
Culturally responsive AI – current landscape
Conclusion
References - Investigating Responsible AI for Scientific Research: An Empirical Study / 2312.09561 / ISBN:https://doi.org/10.48550/arXiv.2312.09561 / Published by ArXiv / on (web) Publishing site
- Abstract
II. Background and motivation
III. Research methodology
IV. Results
V. Discussion
References
Appendix A – Survey Questionnaire - Designing Guiding Principles for NLP for Healthcare: A Case Study of Maternal Health / 2312.11803 / ISBN:https://doi.org/10.48550/arXiv.2312.11803 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Objective
2 Background and significance
3 Materials and methods
4 Results
5 Discussion
6 Conclusion
7 Acknowledgements
References
A Extended Survey Results
B Extended Guiding Principles
C Full survey questions - Beyond Fairness: Alternative Moral Dimensions for Assessing Algorithms and Designing Systems / 2312.12559 / ISBN:https://doi.org/10.48550/arXiv.2312.12559 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 The Reign of Algorithmic Fairness
References - Learning Human-like Representations to Enable Learning Human Values / 2312.14106 / ISBN:https://doi.org/10.48550/arXiv.2312.14106 / Published by ArXiv / on (web) Publishing site
- Abstract
1. Introduction
2. Related Work
3. Experiments on Synthetic Data
4. Experiments on Human Data using Language Models
5. Discussion
References
A. Appendix - Improving Task Instructions for Data Annotators: How Clear Rules and Higher Pay Increase Performance in Data Annotation in the AI Economy / 2312.14565 / ISBN:https://doi.org/10.48550/arXiv.2312.14565 / Published by ArXiv / on (web) Publishing site
- Abstract
I. Introduction
II. Theoretical background and hypotheses
III. Method
IV. Results
V. Discussion
VI. Conclusion
References - Culturally-Attuned Moral Machines: Implicit Learning of Human Value Systems by AI through Inverse Reinforcement Learning / 2312.17479 / ISBN:https://doi.org/10.48550/arXiv.2312.17479 / Published by ArXiv / on (web) Publishing site
- Introduction
Results
Discussion
Methods
References - Autonomous Threat Hunting: A Future Paradigm for AI-Driven Threat Intelligence / 2401.00286 / ISBN:https://doi.org/10.48550/arXiv.2401.00286 / Published by ArXiv / on (web) Publishing site
- Abstract
1. Introduction
3. Autonomous threat hunting: conceptual framework
4. State-of-the-art AI techniques in autonomous threat hunting
5. Challenges in autonomous threat hunting
6. Case studies and applications
7. Evaluation metrics and performance benchmarks
8. Future directions and emerging trends
9. Conclusion
References - Exploring the Frontiers of LLMs in Psychological Applications: A Comprehensive Review / 2401.01519 / ISBN:https://doi.org/10.48550/arXiv.2401.01519 / Published by ArXiv / on (web) Publishing site
- Abstract
1. Introduction
2. LLMs in cognitive and behavioral psychology
3. LLMs in clinical and counseling psychology
5. LLMs in social and cultural psychology
6. LLMs as research tools in psychology
7. Challenges and future directions
8. Conclusion - Synthetic Data in AI: Challenges, Applications, and Ethical Implications / 2401.01629 / ISBN:https://doi.org/10.48550/arXiv.2401.01629 / Published by ArXiv / on (web) Publishing site
- 1. Introduction
2. The generation of synthetic data
3. The usage of synthetic data
4. Risks and Challenges in Utilizing Synthetic Datasets for AI
5. Conclusions
References - MULTI-CASE: A Transformer-based Ethics-aware Multimodal Investigative Intelligence Framework / 2401.01955 / ISBN:https://doi.org/10.48550/arXiv.2401.01955 / Published by ArXiv / on (web) Publishing site
- Abstract
I. Introduction
II. Related work
III. Methodology: model development
IV. System design
V. Evaluation
VI. Discussion and future work
VII. Conclusion
References - AI Ethics Principles in Practice: Perspectives of Designers and Developers / 2112.07467 / ISBN:https://doi.org/10.48550/arXiv.2112.07467 / Published by ArXiv / on (web) Publishing site
- IV. Results
V. Discussion and suggestions
VI. Support mechanisms
References - Unmasking Bias in AI: A Systematic Review of Bias Detection and Mitigation Strategies in Electronic Health Record-based Models / 2310.19917 / ISBN:https://doi.org/10.48550/arXiv.2310.19917 / Published by ArXiv / on (web) Publishing site
- Abstract
Background and significance
Objective
Materials and methods
Results
Discussion
Conclusion - Resolving Ethics Trade-offs in Implementing Responsible AI / 2401.08103 / ISBN:https://doi.org/10.48550/arXiv.2401.08103 / Published by ArXiv / on (web) Publishing site
- I. Introduction
II. Approaches for Resolving Trade-offs
III. Discussion and Recommendations
References - Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making / 2401.08691 / ISBN:https://doi.org/10.48550/arXiv.2401.08691 / Published by ArXiv / on (web) Publishing site
- Abstract
Contents / List of figures / List of tables / Acronyms
1 Introduction
I Understanding bias - 2 Bias and moral framework in AI-based decision making
3 Bias on demand: a framework for generating synthetic data with bias
4 Fairness metrics landscape in machine learning
II Mitigating bias - 5 Fairness mitigation
6 FFTree: a flexible tree to mitigate multiple fairness criteria
III Accounting for bias - 7 Addressing fairness in the banking sector
8 Fairview: an evaluative AI support for addressing fairness
9 Towards fairness through time
IV Conclusions
Bibliography - Business and ethical concerns in domestic Conversational Generative AI-empowered multi-robot systems / 2401.09473 / ISBN:https://doi.org/10.48550/arXiv.2401.09473 / Published by ArXiv / on (web) Publishing site
- Abstract
2 Background
3 Method
6 Conclusion
References - FAIR Enough How Can We Develop and Assess a FAIR-Compliant Dataset for Large Language Models' Training? / 2401.11033 / ISBN:https://doi.org/10.48550/arXiv.2401.11033 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 FAIR Data Principles: Theoretical Background and Significance
3 Data Management Challenges in Large Language Models
4 Framework for FAIR Data Principles Integration in LLM Development
5 Discussion
6 Conclusion
References
Appendices - Enabling Global Image Data Sharing in the Life Sciences / 2401.13023 / ISBN:https://doi.org/10.48550/arXiv.2401.13023 / Published by ArXiv / on (web) Publishing site
- 1. Motivation for White Paper
2. Background
3. Use cases representing different image data types and their challenges and status for sharing
4. Towards global image data sharing
Towards Global Image Data Sharing: A to-do list for various stakeholders
References - Five ethical principles for generative AI in scientific research / 2401.15284 / ISBN:https://doi.org/10.48550/arXiv.2401.15284 / Published by ArXiv / on (web) Publishing site
- Abstract
Introduction
Principle 1: Understand model training and output
Principle 2: Respect privacy, confidentiality, and copyright
Principle 3: Avoid plagiarism and policy violations
Principle 4: Apply AI beneficially
Principle 5: Use AI transparently and reproducibly
Concluding remarks
References - A Scoping Study of Evaluation Practices for Responsible AI Tools: Steps Towards Effectiveness Evaluations / 2401.17486 / ISBN:https://doi.org/10.48550/arXiv.2401.17486 / Published by ArXiv / on (web) Publishing site
- 2 Related work
4 RAI tool evaluation practices
5 Towards evaluation of RAI tool effectiveness
References
A List of RAI tools, with their primary publication
B RAI tools listed by target stage of AI development
C List of publications, with their associated RAI tools
D Summary of themes and codes - Detecting Multimedia Generated by Large AI Models: A Survey / 2402.00045 / ISBN:https://doi.org/10.48550/arXiv.2402.00045 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Generation
3 Detection
4 Tools
5 Discussion
6 Conclusion
References
Authors' bios - Responsible developments and networking research: a reflection beyond a paper ethical statement / 2402.00442 / ISBN:https://doi.org/10.48550/arXiv.2402.00442 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Networking research today
3 Beyond technical dimensions
5 Possible next steps
References - Generative Artificial Intelligence in Higher Education: Evidence from an Analysis of Institutional Policies and Guidelines / 2402.01659 / ISBN:https://doi.org/10.48550/arXiv.2402.01659 / Published by ArXiv / on (web) Publishing site
- 1. Introduction
2. Related literature
4. Findings
5. Discussion
References - Trust and ethical considerations in a multi-modal, explainable AI-driven chatbot tutoring system: The case of collaboratively solving Rubik's Cubeà / 2402.01760 / ISBN:https://doi.org/10.48550/arXiv.2402.01760 / Published by ArXiv / on (web) Publishing site
- 1. Introduction
2. Literature review
3. Methodology
References
C. ROSE: Tool and Data ResOurces to Explore the Instability of SEntiment Analysis Systems - Commercial AI, Conflict, and Moral Responsibility: A theoretical analysis and practical approach to the moral responsibilities associated with dual-use AI technology / 2402.01762 / ISBN:https://doi.org/10.48550/arXiv.2402.01762 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Establishing the novel aspect of AI as a crossover technology
3 Moral and ethical obligations when developing crossover AI technology
4 Recommendations to address threats posed by crossover AI technology
References - (A)I Am Not a Lawyer, But...: Engaging Legal Experts towards Responsible LLM Policies for Legal Advice / 2402.01864 / ISBN:https://doi.org/10.48550/arXiv.2402.01864 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Related work and our approach
3 Methods: case-based expert deliberation
4 Results
References
A Provided AI response strategies and examples - POLARIS: A framework to guide the development of Trustworthy AI systems / 2402.05340 / ISBN:https://doi.org/10.48550/arXiv.2402.05340 / Published by ArXiv / on (web) Publishing site
- 2 Background
3 State of the practice
4 The POLARIS framework
5 POLARIS framework application
References - Face Recognition: to Deploy or not to Deploy? A Framework for Assessing the Proportional Use of Face Recognition Systems in Real-World Scenarios / 2402.05731 / ISBN:https://doi.org/10.48550/arXiv.2402.05731 / Published by ArXiv / on (web) Publishing site
- Abstract
1. Introduction
3. Intervention models from other fields
4. Proposed framework
5. The framework in practice
7. Conclusions and future work - Ethics in AI through the Practitioner's View: A Grounded Theory Literature Review / 2206.09514 / ISBN:https://doi.org/10.48550/arXiv.2206.09514 / Published by ArXiv / on (web) Publishing site
- 2 Background
4 Challenges, Threats and Limitations
5 Findings
References
Authors - Generative Artificial Intelligence in Healthcare: Ethical Considerations and Assessment Checklist / 2311.02107 / ISBN:https://doi.org/10.48550/arXiv.2311.02107 / Published by ArXiv / on (web) Publishing site
- Introduction
Methods
Discussion
Reference
Appendix - How do machines learn? Evaluating the AIcon2abs method / 2401.07386 / ISBN:https://doi.org/10.48550/arXiv.2401.07386 / Published by ArXiv / on (web) Publishing site
- Abstract
1. Introduction
3. AIcon2abs Instructional Unit
4. Results
5. Conclusion
References - I Think, Therefore I am: Benchmarking Awareness of Large Language Models Using AwareBench / 2401.17882 / ISBN:https://doi.org/10.48550/arXiv.2401.17882 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Related Work
3 Awareness in LLMs
5 Experiments
Ethical Statement
References
A AWAREEVAL Dataset Details
B Experimental Settings & Results - Mapping the Ethics of Generative AI: A Comprehensive Scoping Review / 2402.08323 / ISBN:https://doi.org/10.48550/arXiv.2402.08323 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
3 Results
4 Discussion
References
Appendix A
Appendix C - Taking Training Seriously: Human Guidance and Management-Based Regulation of Artificial Intelligence / 2402.08466 / ISBN:https://doi.org/10.48550/arXiv.2402.08466 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Emerging Management-based AI Regulation
3 Management-based Regulation and Human-Guided Training
4 Techniques of Human-Guided Training
5 Advantages of Human-Guided Training
6 Limitations
7 Conclusion
References - User Modeling and User Profiling: A Comprehensive Survey / 2402.09660 / ISBN:https://doi.org/10.48550/arXiv.2402.09660 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Analysis of the Terminology
3 Paradigm Shifts and New Trends
4 Current Taxonomy
5 Discussion and Future Research Directions
References - Inadequacies of Large Language Model Benchmarks in the Era of Generative Artificial Intelligence / 2402.09880 / ISBN:https://doi.org/10.48550/arXiv.2402.09880 / Published by ArXiv / on (web) Publishing site
- Abstract
I. Introduction
II. Background and Related Work
III. Unified Evaluation Framework For LLM Benchmarks
IV. Technological Aspects
V. Processual Elements
VII. Discussions
VIII. Conclusion
References
Authors
Appendix A Examples of Benchmark Inadequacies in Technological Aspects
Appendix B Examples of Benchmark Inadequacies in Processual Elements
Appendix C Examples of Benchmark Inadequacies in Human Dynamics - Copyleft for Alleviating AIGC Copyright Dilemma: What-if Analysis, Public Perception and Implications / 2402.12216 / ISBN:https://doi.org/10.48550/arXiv.2402.12216 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Related Work
3 The AIGC Copyright Dilemma: A What-if Analysis
4 Case Study Under the Copyleft
5 Public Perception: A Survey Method
6 Implications and Recommendations
References - Evolving AI Collectives to Enhance Human Diversity and Enable Self-Regulation / 2402.12590 / ISBN:https://doi.org/10.48550/arXiv.2402.12590 / Published by ArXiv / on (web) Publishing site
- Abstract
1. Introduction
2. Emergence of Free-Formed AI Collectives
3. Enhanced Performance of Free-Formed AI Collectives
4. Robustness of Free-Formed AI Collectives Against Risks
5. Open Challenges for Free-Formed AI Collectives
Impact Statements
References
A. Cocktail Simulation
C. Public Good Simulation - What if LLMs Have Different World Views: Simulating Alien Civilizations with LLM-based Agents / 2402.13184 / ISBN:https://doi.org/10.48550/arXiv.2402.13184 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Related Work
4 CosmoAgent Architecture
5 Evaluation
6 Experimental Design
7 Results
8 Conclusion
References - The METRIC-framework for assessing data quality for trustworthy AI in medicine: a systematic review / 2402.13635 / ISBN:https://doi.org/10.48550/arXiv.2402.13635 / Published by ArXiv / on (web) Publishing site
- Introduction
Results
METRIC-framework for medical training data
References - The European Commitment to Human-Centered Technology: The Integral Role of HCI in the EU AI Act's Success / 2402.14728 / ISBN:https://doi.org/10.48550/arXiv.2402.14728 / Published by ArXiv / on (web) Publishing site
- 2 The EU AI Act
3 There is no reliable AI regulation without a sound theory of human-AI interaction
4 There is no trustworthy AI without HCI
5 There is no community without common language and communication
References - Multi-stakeholder Perspective on Responsible Artificial Intelligence and Acceptability in Education / 2402.15027 / ISBN:https://doi.org/10.48550/arXiv.2402.15027 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Background
4 Analysis
5 Results
References - Autonomous Vehicles: Evolution of Artificial Intelligence and Learning Algorithms / 2402.17690 / ISBN:https://doi.org/10.48550/arXiv.2402.17690 / Published by ArXiv / on (web) Publishing site
- I. Introduction
II. The AI-Powered Development Life-Cycle in Autonomous Vehicles
III. Ethical Considerations and Bias in AI-Driven Software Development for Autonomous Vehicles
IV. AI’S Role in the Emerging Trend of Internet of Things (IOT) Ecosystem for Autonomous Vehicles
V. Review of Existing Research and Use Cases
VI. AI and Learning Algorithms Statistics for Autonomous Vehicles
VII. Conclusion
References - Envisioning the Applications and Implications of Generative AI for News Media / 2402.18835 / ISBN:https://doi.org/10.48550/arXiv.2402.18835 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 The Suitability of Generative AI for Newsroom Tasks
3 Conclusion
References - FATE in MMLA: A Student-Centred Exploration of Fairness, Accountability, Transparency, and Ethics in Multimodal Learning Analytics / 2402.19071 / ISBN:https://doi.org/10.48550/arXiv.2402.19071 / Published by ArXiv / on (web) Publishing site
- 1. Introduction
3. Methods
References - Guidelines for Integrating Value Sensitive Design in Responsible AI Toolkits / 2403.00145 / ISBN:https://doi.org/10.48550/arXiv.2403.00145 / Published by ArXiv / on (web) Publishing site
- 3 Methodology
5 Discussion
References
B Toolkits Considered for Inclusion - Implications of Regulations on the Use of AI and Generative AI for Human-Centered Responsible Artificial Intelligence / 2403.00148 / ISBN:https://doi.org/10.48550/arXiv.2403.00148 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Motivation & Background
References - The Minimum Information about CLinical Artificial Intelligence Checklist for Generative Modeling Research (MI-CLAIM-GEN) / 2403.02558 / ISBN:https://doi.org/10.48550/arXiv.2403.02558 / Published by ArXiv / on (web) Publishing site
- Abstract
Part 1. Study design
Part 2. A new train-test split for prompt development and few-shot learning
Part 3. Updates to baseline selection
Part 4. Model evaluation
Part 5. Interpretability of generative models
Part 6. End-to-end pipeline replication
Conclusions
Disclosures
Table 1. Updated MI-CLAIM checklist for generative AI clinical studies.
References - Towards an AI-Enhanced Cyber Threat Intelligence Processing Pipeline / 2403.03265 / ISBN:https://doi.org/10.48550/arXiv.2403.03265 / Published by ArXiv / on (web) Publishing site
- Abstract
I. Introduction & Motivation
II. Background & Literature Review
III. The AI-Enhanced CTI Processing Pipeline
IV. Challenges and Considerations
V. Conclusions & Future Research
References - A Survey on Human-AI Teaming with Large Pre-Trained Models / 2403.04931 / ISBN:https://doi.org/10.48550/arXiv.2403.04931 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 AI Model Improvements with Human-AI Teaming
3 Effective Human-AI Joint Systems
4 Safe, Secure and Trustworthy AI
5 Applications
6 Conclusion
References - Generative AI in Higher Education: Seeing ChatGPT Through Universities' Policies, Resources, and Guidelines / 2312.05235 / ISBN:https://doi.org/10.48550/arXiv.2312.05235 / Published by ArXiv / on (web) Publishing site
- References
- Kantian Deontology Meets AI Alignment: Towards Morally Grounded Fairness Metrics / 2311.05227 / ISBN:https://doi.org/10.48550/arXiv.2311.05227 / Published by ArXiv / on (web) Publishing site
- References
- Worldwide AI Ethics: a review of 200 guidelines and recommendations for AI governance / 2206.11922 / ISBN:https://doi.org/10.48550/arXiv.2206.11922 / Published by ArXiv / on (web) Publishing site
- References
- How Trustworthy are Open-Source LLMs? An Assessment under Malicious Demonstrations Shows their Vulnerabilities / 2311.09447 / ISBN:https://doi.org/10.48550/arXiv.2311.09447 / Published by ArXiv / on (web) Publishing site
- B Baseline Setup
D More Results - Exploring the Frontiers of LLMs in Psychological Applications: A Comprehensive Review / 2401.01519 / ISBN:https://doi.org/10.48550/arXiv.2401.01519 / Published by ArXiv / on (web) Publishing site
- References
- AGI Artificial General Intelligence for Education / 2304.12479 / ISBN:https://doi.org/10.48550/arXiv.2304.12479 / Published by ArXiv / on (web) Publishing site
- Abstract
1. Introduction
2. What is AGI
3. The Potentials of AGI in Transforming Future Education
4. Ethical Issues and Concerns
5. Discussion
References - Moral Sparks in Social Media Narratives / 2310.19268 / ISBN:https://doi.org/10.48550/arXiv.2310.19268 / Published by ArXiv / on (web) Publishing site
- Abstract
1. Introduction
2. Related Work
3. Data
4. Methods
5. Results
References - Responsible Artificial Intelligence: A Structured Literature Review / 2403.06910 / ISBN:https://doi.org/10.48550/arXiv.2403.06910 / Published by ArXiv / on (web) Publishing site
- Abstract
3. Analysis
4. Discussion
References - Legally Binding but Unfair? Towards Assessing Fairness of Privacy Policies / 2403.08115 / ISBN:https://doi.org/10.48550/arXiv.2403.08115 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Related Work
4 Informational Fairness
6 Ethics and Morality
7 Use Cases and Applications
References - Towards a Privacy and Security-Aware Framework for Ethical AI: Guiding the Development and Assessment of AI Systems / 2403.08624 / ISBN:https://doi.org/10.48550/arXiv.2403.08624 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
4 Results of the Systematic Literature Review
References - Review of Generative AI Methods in Cybersecurity / 2403.08701 / ISBN:https://doi.org/10.48550/arXiv.2403.08701 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Attacking GenAI
3 Cyber Offense
4 Cyber Defence
5 Implications of Generative AI in Social, Legal, and Ethical Domains
References - Evaluation Ethics of LLMs in Legal Domain / 2403.11152 / ISBN:https://doi.org/10.48550/arXiv.2403.11152 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Related Work
3 Method
4 Experiment
5 Conclusion & Future Work
References - Trust in AI: Progress, Challenges, and Future Directions / 2403.14680 / ISBN:https://doi.org/10.48550/arXiv.2403.14680 / Published by ArXiv / on (web) Publishing site
- 1. Introduction
3. Findings
4. Discussion
5. Concluding Remarks and Future Directions
Reference - AI Ethics: A Bibliometric Analysis, Critical Issues, and Key Gaps / 2403.14681 / ISBN:https://doi.org/10.48550/arXiv.2403.14681 / Published by ArXiv / on (web) Publishing site
- Abstract
Introduction
Results
AI Ethics Development Phases Based on Keyword Analysis
Key AI Ethics Issues
Key Gaps
Limitations and Conclusion
References
Authors bios - Safeguarding Marketing Research: The Generation, Identification, and Mitigation of AI-Fabricated Disinformation / 2403.14706 / ISBN:https://doi.org/10.48550/arXiv.2403.14706 / Published by ArXiv / on (web) Publishing site
- Abstract
Introduction
Methodology
Data
Results
Conclusion
Web Appendix A: Analysis of the Disinformation Manipulations - The Journey to Trustworthy AI- Part 1 Pursuit of Pragmatic Frameworks / 2403.15457 / ISBN:https://doi.org/10.48550/arXiv.2403.15457 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Context
2 Trustworthy AI: Too Many Definitions or Lack Thereof?
3 Complexities and Challenges
4 AI Regulation: Current Global Landscape
5 Risk
6 Bias and Fairness
7 Explainable AI as an Enabler of Trustworthy AI
8 Implementation Framework
9 A Few Suggestions for a Viable Path Forward
10 Summary and Next Steps
11 About the Authors
A Appendix
References - Analyzing Potential Solutions Involving Regulation to Escape Some of AI's Ethical Concerns / 2403.15507 / ISBN:https://doi.org/10.48550/arXiv.2403.15507 / Published by ArXiv / on (web) Publishing site
- Various AI Ethical Concerns
A Possible Solution to These Concerns With Business Self-Regulation
References - The Pursuit of Fairness in Artificial Intelligence Models A Survey / 2403.17333 / ISBN:https://doi.org/10.48550/arXiv.2403.17333 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Related Survey
3 Conceptualizing Fairness and Bias in ML
4 Practical cases of unfairness in real-world setting
5 Ways to mitigate bias and promote Fairness
6 How Users can be affected by unfair ML Systems
7 Challenges and Limitations
8 Conclusion
References - Domain-Specific Evaluation Strategies for AI in Journalism / 2403.17911 / ISBN:https://doi.org/10.48550/arXiv.2403.17911 / Published by ArXiv / on (web) Publishing site
- 1 Motivation
2 Existing AI Evaluation Approaches
3 Blueprints for AI Evaluation in Journalism
References - Power and Play Investigating License to Critique in Teams AI Ethics Discussions / 2403.19049 / ISBN:https://doi.org/10.48550/arXiv.2403.19049 / Published by ArXiv / on (web) Publishing site
- 1 Introduction and Related Work
2 Methods
3 RQ1: What Factors Influence Members’ “License to Critique” when Discussing AI Ethics with their Team?
5 Discussion
References - Implications of the AI Act for Non-Discrimination Law and Algorithmic Fairness / 2403.20089 / ISBN:https://doi.org/10.48550/arXiv.2403.20089 / Published by ArXiv / on (web) Publishing site
- Abstract
2 Non-discrimination law vs. algorithmic fairness
3 Implications of the AI Act
References - AI Act and Large Language Models (LLMs): When critical issues and privacy impact require human and ethical oversight / 2404.00600 / ISBN:https://doi.org/10.48550/arXiv.2404.00600 / Published by ArXiv / on (web) Publishing site
- Abstract
2. The implementation of the AI Act
3. The definition of artificial intelligence systems
5. Human Oversight
6. Large Language Models (LLMs) - Introduction
8. Conclusions
9. References - Exploring the Nexus of Large Language Models and Legal Systems: A Short Survey / 2404.00990 / ISBN:https://doi.org/10.48550/arXiv.2404.00990 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Applications of Large Language Models in Legal Tasks
3 Fine-Tuned Large Language Models in Various Countries and Regions
4 Legal Problems of Large Language Models
5 Data Resources for Large Language Models in Law
6 Conclusion and Future Directions
References - A Review of Multi-Modal Large Language and Vision Models / 2404.01322 / ISBN:https://doi.org/10.48550/arXiv.2404.01322 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 What is a Language Model?
3 Proprietary vs. Open Source LLMs
4 Specific Large Language Models
5 Vision Models and Multi-Modal Large Language Models
6 Model Tuning
7 Model Evaluation and Benchmarking
8 Conclusions
References - Balancing Progress and Responsibility: A Synthesis of Sustainability Trade-Offs of AI-Based Systems / 2404.03995 / ISBN:https://doi.org/10.48550/arXiv.2404.03995 / Published by ArXiv / on (web) Publishing site
- I. Introduction
IV. Results
V. Discussion
References - Designing for Human-Agent Alignment: Understanding what humans want from their agents / 2404.04289 / ISBN:https://doi.org/10.1145/3613905.3650948 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
5 Discussion
References - Is Your AI Truly Yours? Leveraging Blockchain for Copyrights, Provenance, and Lineage / 2404.06077 / ISBN:https://doi.org/10.48550/arXiv.2404.06077 / Published by ArXiv / on (web) Publishing site
- Abstract
I. Introduction
II. Preliminaries
III. Proposed Design: IBIS
IV. Detailed Construction
V. Implementation on DAML
VI. Evaluation
VII. Conclusion
References - Frontier AI Ethics: Anticipating and Evaluating the Societal Impacts of Generative Agents / 2404.06750 / ISBN:https://arxiv.org/abs/2404.06750 / Published by ArXiv / on (web) Publishing site
- Abstract
Introduction
A Primer
Polarised Responses
Rebooting Machine Ethics
Generative Agents in Society
Conclusion
References - Safeguarding Marketing Research: The Generation, Identification, and Mitigation of AI-Fabricated Disinformation / 2403.14706 / ISBN:https://doi.org/10.48550/arXiv.2403.14706 / Published by ArXiv / on (web) Publishing site
- Bibliography
- The Pursuit of Fairness in Artificial Intelligence Models A Survey / 2403.17333 / ISBN:https://doi.org/10.48550/arXiv.2403.17333 / Published by ArXiv / on (web) Publishing site
- A Appendices
- Ethical Implications of ChatGPT in Higher Education: A Scoping Review / 2311.14378 / ISBN:https://doi.org/10.48550/arXiv.2311.14378 / Published by ArXiv / on (web) Publishing site
- Authors
- A Critical Survey on Fairness Benefits of Explainable AI / 2310.13007 / ISBN:https://doi.org/10.1145/3630106.3658990 / Published by ArXiv / on (web) Publishing site
- 2 Background
4 Critical Survey
5 Three Patterns of Critique
6 Conclusion and Outlook
References
A Methodologies of Surveyed Literature - AI Alignment: A Comprehensive Survey / 2310.19852 / ISBN:https://doi.org/10.48550/arXiv.2310.19852 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Learning from Feedback
3 Learning under Distribution Shift
4 Assurance
5 Governance
6 Conclusion
References - Regulating AI-Based Remote Biometric Identification. Investigating the Public Demand for Bans, Audits, and Public Database Registrations / 2401.13605 / ISBN:https://doi.org/10.48550/arXiv.2401.13605 / Published by ArXiv / on (web) Publishing site
- Abstract
5 Research Questions
6 Results
7 Discussion
References - Generative Ghosts: Anticipating Benefits and Risks of AI Afterlives / 2402.01662 / ISBN:https://doi.org/10.48550/arXiv.2402.01662 / Published by ArXiv / on (web) Publishing site
- Abstract
Introduction
Related Work
Generative Ghosts: A Design Space
Anticipating Benefits and Risks of Generative Ghosts
Discussion
Conclusion
References - Epistemic Power in AI Ethics Labor: Legitimizing Located Complaints / 2402.08171 / ISBN:https://doi.org/10.1145/3630106.3658973 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 The Lower Status of Ethics Work within AI Cultures
3 Automated Model Cards: Legitimacy via Quantified Objectivity
6 Conclusions: Towards Humble Technical Practices
References - On the role of ethics and sustainability in business innovation / 2404.07678 / ISBN:https://doi.org/10.48550/arXiv.2404.07678 / Published by ArXiv / on (web) Publishing site
- Background
Ethical considerations
Recommendations
Conclusion
About the authors - PoliTune: Analyzing the Impact of Data Selection and Fine-Tuning on Economic and Political Biases in Large Language Models / 2404.08699 / ISBN:https://doi.org/10.48550/arXiv.2404.08699 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Background and Related Work
3 Methodology
4 Evaluation
5 Conclusion
References - Detecting AI Generated Text Based on NLP and Machine Learning Approaches / 2404.10032 / ISBN:https://doi.org/10.48550/arXiv.2404.10032 / Published by ArXiv / on (web) Publishing site
- Abstract
I. Introduction
II. Literature Review
III. Proposed Methodology
V. Conclusion - Debunking Robot Rights Metaphysically, Ethically, and Legally / 2404.10072 / ISBN:https://doi.org/10.48550/arXiv.2404.10072 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
3 The Robots at Issue
4 The Machines Like us Argument: Mistaking the Map for the Territory
5 Embodied Enactive (Post-Cartesian) Perspectives on Cognition
6 Posthumanism
7 The Legal Perspective
Notes
References - Characterizing and modeling harms from interactions with design patterns in AI interfaces / 2404.11370 / ISBN:https://doi.org/10.48550/arXiv.2404.11370 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Background & Related Work
3 Scoping Review of Design Patterns, Affordances, and Harms in AI Interfaces
4 DECAI: Design-Enhanced Control of AI Systems
5 Case Studies
6 Discussion
References - Taxonomy to Regulation: A (Geo)Political Taxonomy for AI Risks and Regulatory Measures in the EU AI Act / 2404.11476 / ISBN:https://doi.org/10.48550/arXiv.2404.11476 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
3 A Geo-Political AI Risk Taxonomy
4 European Union Artificial Intelligence Act
5 Conclusion
References - Just Like Me: The Role of Opinions and Personal Experiences in The Perception of Explanations in Subjective Decision-Making / 2404.12558 / ISBN:https://doi.org/10.48550/arXiv.2404.12558 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Method
4 Discussion and Implications
References - Large Language Model Supply Chain: A Research Agenda / 2404.12736 / ISBN:https://doi.org/10.48550/arXiv.2404.12736 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Definition of LLM Supply Chain
3 LLM Infrastructure
4 LLM Lifecycle
5 Downstream Ecosystem
6 Conclusion
References - The Necessity of AI Audit Standards Boards / 2404.13060 / ISBN:https://doi.org/10.48550/arXiv.2404.13060 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Audit the process, not just the product
3 Governance for safety
4 Auditing standards body, not standard audits
References - Modeling Emotions and Ethics with Large Language Models / 2404.13071 / ISBN:https://doi.org/10.48550/arXiv.2404.13071 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Qualifying and Quantifying Emotions
3 Case Study #1: Linguistic Features of Emotion
4 Qualifying and Quantifying Ethics
5 Concluding Remarks
References - From Model Performance to Claim: How a Change of Focus in Machine Learning Replicability Can Help Bridge the Responsibility Gap / 2404.13131 / ISBN:https://doi.org/10.1145/3630106.3658951 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Disentangling Replicability of Model Performance Claim and Replicability of Social Claim
3 How Claim Replicability Helps Bridge the Responsibility Gap
4 Claim Replicability's Practical Implication
5 Concluding Remarks
References - A Practical Multilevel Governance Framework for Autonomous and Intelligent Systems / 2404.13719 / ISBN:https://doi.org/10.48550/arXiv.2404.13719 / Published by ArXiv / on (web) Publishing site
- I. Introduction
II. Comprehensive Governance of Emerging Technologies
IV. Application of the Framework for the Development of AIs
References - Beyond Personhood: Agency, Accountability, and the Limits of Anthropomorphic Ethical Analysis / 2404.13861 / ISBN:https://doi.org/10.48550/arXiv.2404.13861 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Mechanistic Agency: A Common View in AI Practice
3 Volitional Agency: an Alternative Approach
4 Alternatives to AI as Agent
References - AI Procurement Checklists: Revisiting Implementation in the Age of AI Governance / 2404.14660 / ISBN:https://doi.org/10.48550/arXiv.2404.14660 / Published by ArXiv / on (web) Publishing site
1 Technical assessments require an AI expert to complete — and we don’t have enough experts
3 Substantive and Procedural Transparency are Necessary for Deploying Effective and Ethical AI systems - Fairness in AI: challenges in bridging the gap between algorithms and law / 2404.19371 / ISBN:https://doi.org/10.48550/arXiv.2404.19371 / Published by ArXiv / on (web) Publishing site
- III. Prevalent Algorithmic Fairness Definitions
IV. Criteria for the Selection of Fairness Methods
References - War Elephants: Rethinking Combat AI and Human Oversight / 2404.19573 / ISBN:https://doi.org/10.48550/arXiv.2404.19573 / Published by ArXiv / on (web) Publishing site
- Abstract
2 Background
3 Lessons from History: War Elephants
4 Discussion
5 Conclusions
References - Not a Swiss Army Knife: Academics' Perceptions of Trade-Offs Around Generative Artificial Intelligence Use / 2405.00995 / ISBN:https://doi.org/10.48550/arXiv.2405.00995 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Related Work
3 Method
4 Findings
5 Discussion
7 Conclusion
References - Towards an Ethical and Inclusive Implementation of Artificial Intelligence in Organizations: A Multidimensional Framework / 2405.01697 / ISBN:https://doi.org/10.48550/arXiv.2405.01697 / Published by ArXiv / on (web) Publishing site
1 Technocriticism and Key Actors in the Age of AI
2 How can organizations participate - A Survey on Large Language Models for Critical Societal Domains: Finance, Healthcare, and Law / 2405.01769 / ISBN:https://doi.org/10.48550/arXiv.2405.01769 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Related Surveys
3 Finance
4 Medicine and Healthcare
5 Law
6 Ethics
References - AI-Powered Autonomous Weapons Risk Geopolitical Instability and Threaten AI Research / 2405.01859 / ISBN:https://doi.org/10.48550/arXiv.2405.01859 / Published by ArXiv / on (web) Publishing site
- 1. Introduction
2. Current State of AWS
3. AWS Proliferation and Threats to Academic Research
4. Policy Recommendations
Impact Statement
References - Responsible AI: Portraits with Intelligent Bibliometrics / 2405.02846 / ISBN:https://doi.org/10.48550/arXiv.2405.02846 / Published by ArXiv / on (web) Publishing site
- Abstract
I. Introduction
II. Conceptualization: Responsible AI
III. Data and Methodology
IV. Bibliometric Portraits of Responsible AI
V. Discussion and Conclusions
References
Authors - Exploring the Potential of the Large Language Models (LLMs) in Identifying Misleading News Headlines / 2405.03153 / ISBN:https://doi.org/10.48550/arXiv.2405.03153 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Related Work
4 Results
5 Discussion
6 Conclusion
References - Organizing a Society of Language Models: Structures and Mechanisms for Enhanced Collective Intelligence / 2405.03825 / ISBN:https://doi.org/10.48550/arXiv.2405.03825 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Motivation
3 Proposed Organizational Forms
4 Interaction Mechanisms
5 Governance and Organization
6 Unified Legal Framework
References - A Fourth Wave of Open Data? Exploring the Spectrum of Scenarios for Open Data and Generative AI / 2405.04333 / ISBN:https://doi.org/10.48550/arXiv.2405.04333 / Published by ArXiv / on (web) Publishing site
- Glossary of Terms
Executive Summary
1. Introduction
2. Methodology
3. A Spectrum of Scenarios of Open Data for Generative AI
4. Open Data Requirements And Diagnostic
5. Recommendations for Advancing Open Data in Generative AI
Appendix - Guiding the Way: A Comprehensive Examination of AI Guidelines in Global Media / 2405.04706 / ISBN:https://doi.org/10.48550/arXiv.2405.04706 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Theoretical Framework
3 Data and Methods
4 Results
5 Discussion and conclusions
References - Trustworthy AI-Generative Content in Intelligent 6G Network: Adversarial, Privacy, and Fairness / 2405.05930 / ISBN:https://doi.org/10.48550/arXiv.2405.05930 / Published by ArXiv / on (web) Publishing site
- Abstract
I. Introduction
II. Trustworthy AIGC in 6G Network
III. Adversarial of AIGC Models in 6G Network
IV. Privacy of AIGC in 6G Network
V. Fairness of AIGC in 6G Network
VI. Case Study
VII. Challenges and Future Research Directions
References - Towards ethical multimodal systems / 2304.13765 / ISBN:https://doi.org/10.48550/arXiv.2304.13765 / Published by ArXiv / on (web) Publishing site
- References
- RAI Guidelines: Method for Generating Responsible AI Guidelines Grounded in Regulations and Usable by (Non-)Technical Roles / 2307.15158 / ISBN:https://doi.org/10.48550/arXiv.2307.15158 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Related Work
4 Method for Generating Responsible AI Guidelines
5 Evaluation of the 22 Responsible AI Guidelines
6 Discussion
7 Conclusion
References
B Mapping Guidelines with EU AI Act Articles - Redefining Qualitative Analysis in the AI Era: Utilizing ChatGPT for Efficient Thematic Analysis / 2309.10771 / ISBN:https://doi.org/10.48550/arXiv.2309.10771 / on (web) Publishing site
- Abstract
1 Introduction
2 Related Work
5 Analyses of the Design Process
6 User’s Attitude on ChatGPT’s Qualitative Analysis Assistance: from no to yes
8 Limitations and Future Work
References - XXAI: Towards eXplicitly eXplainable Artificial Intelligence / 2401.03093 / ISBN:https://doi.org/10.48550/arXiv.2401.03093 / Published by ArXiv / on (web) Publishing site
- Abstract
1. Introduction
3. Overcoming the barriers to widespread use of symbolic AI
4. Discussion of the problems of symbolic AI and ways to overcome them
5. Conclusions and prospects
References - Should agentic conversational AI change how we think about ethics? Characterising an interactional ethics centred on respect / 2401.09082 / ISBN:https://doi.org/10.48550/arXiv.2401.09082 / Published by ArXiv / on (web) Publishing site
- Abstract
Introduction
Social-interactional harms
Design implications for LLM agents
Conclusion
References - Unsocial Intelligence: an Investigation of the Assumptions of AGI Discourse / 2401.13142 / ISBN:https://doi.org/10.48550/arXiv.2401.13142 / Published by ArXiv / on (web) Publishing site
- 3 The Motley Choices of AGI Discourse
4 Towards Contextualized, Politically Legitimate, and Social Intelligence
References - Not My Voice! A Taxonomy of Ethical and Safety Harms of Speech Generators / 2402.01708 / ISBN:https://doi.org/10.48550/arXiv.2402.01708 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Related Work
3 Overview of Speech Generation
4 Research Approach
5 Conceptual Framework
6 Taxonomy of Harms
7 Discussion
8 Conclusion
References
A Appendix - The Wolf Within: Covert Injection of Malice into MLLM Societies via an MLLM Operative / 2402.14859 / ISBN:https://doi.org/10.48550/arXiv.2402.14859 / Published by ArXiv / on (web) Publishing site
- Abstract
1. Introduction
2. Related Work
3. Methodology
4. Experiments
5. Conclusion
References - Social Choice Should Guide AI Alignment in Dealing with Diverse Human Feedback / 2404.10271 / ISBN:https://doi.org/10.48550/arXiv.2404.10271 / Published by ArXiv / on (web) Publishing site
- Abstract
1. Introduction
2. Background
5. What Is the Format of Human Feedback?
6. How Do We Incorporate Diverse Individual Feedback?
7. Which Traditional Social-Choice-Theoretic Concepts Are Most Relevant?
8. How Should We Account for Behavioral Aspects and Human Cognitive Structures?
10. Conclusion
References - A scoping review of using Large Language Models (LLMs) to investigate Electronic Health Records (EHRs) / 2405.03066 / ISBN:https://doi.org/10.48550/arXiv.2405.03066 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Materials
3 Results
4 Discussion
5 Conclusions
Appendix
References - Integrating Emotional and Linguistic Models for Ethical Compliance in Large Language Models / 2405.07076 / ISBN:https://doi.org/10.48550/arXiv.2405.07076 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Related Work
3 Quantitative Models of Emotions, Behaviors, and Ethics
4 Pilot Studies
5 Conclusion
Limitations
References
Appendix S: Multiple Adversarial LLMs
Appendix A: Wheels of Emotions
Appendix C: Z. Sayre to F. S. Fitzgerald w/ Mixed Emotions
Appendix D: Complex Emotions - Using ChatGPT for Thematic Analysis / 2405.08828 / ISBN:https://doi.org/10.48550/arXiv.2405.08828 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Coding in Thematic Analysis: Manual vs GPT-driven Approaches
3 Pilot-testing: UN Policy Documents Thematic Analysis Supported by GPT
4 Validation Using Topic Modeling
5 Discussion and Limitations
6 OpenAI Updates on Policies and Model Capabilities: Implications for Thematic Analysis
7 Conclusion
References - When AI Eats Itself: On the Caveats of Data Pollution in the Era of Generative AI / 2405.09597 / ISBN:https://doi.org/10.48550/arXiv.2405.09597 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 RQ1: What Happens When AI Eats Itself?
3 RQ2: What Technical Strategies Can Be Employed to Mitigate the Negative Consequences of AI Autophagy?
4 RQ3: Which Regulatory Strategies Can Be Employed to Address These Negative Consequences?
5 Conclusions and Outlook
6 Ethical Disclaimer and Acknowledgements
7 References - Cyber Risks of Machine Translation Critical Errors : Arabic Mental Health Tweets as a Case Study / 2405.11668 / ISBN:https://doi.org/10.48550/arXiv.2405.11668 / Published by ArXiv / on (web) Publishing site
- 2.MT Critical Errors
3. Data Compiling and Annotation
7. Bibliographical References - The Narrow Depth and Breadth of Corporate Responsible AI Research / 2405.12193 / ISBN:https://doi.org/10.48550/arXiv.2405.12193 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Related Literature on Industry’s Engagement in Responsible AI Research
3 Motivations for Industry to Engage in Responsible AI Research
4 The Narrow Depth of Industry’s Responsible AI Research
5 The Narrow Breadth of Industry’s Responsible AI Research
6 Limited Adoption of Responsible AI Research in Commercialization: Patent Citation Analysis
7 Discussion
References
S1 Additional Analyses on Engagement Analysis
S2 Additional Analyses on Linguistic Analysis - Pragmatic auditing: a pilot-driven approach for auditing Machine Learning systems / 2405.13191 / ISBN:https://doi.org/10.48550/arXiv.2405.13191 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Related Work
3 The Audit Procedure
4 Conducting the Pilots
5 Lessons Learned from the Pilots
6 Conclusion and Outlook
References
C The Risk Assessment Database
D Lifecycle Mapping of Pilot 1
E Lifecycle Mapping of Pilot 2: The GARMI Vision Module - A Comprehensive Overview of Large Language Models (LLMs) for Cyber Defences: Opportunities and Directions / 2405.14487 / ISBN:https://doi.org/10.48550/arXiv.2405.14487 / Published by ArXiv / on (web) Publishing site
- Abstract
I. Introduction
II. Threat Intelligence
III. Vulnerability Assessment
IV. Network Security
V. Privacy Preservation
VI. Awareness
VII. Cyber Security Operations Automation
VIII. Ethical LLMs
IX. Challenges and Open Problems
X. Conclusions
References - Towards Clinical AI Fairness: Filling Gaps in the Puzzle / 2405.17921 / ISBN:https://doi.org/10.48550/arXiv.2405.17921 / Published by ArXiv / on (web) Publishing site
- Main
Methods in clinical AI fairness research
Discussion
Reference
Additional material - The ethical situation of DALL-E 2 / 2405.19176 / ISBN:https://doi.org/10.48550/arXiv.2405.19176 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Understanding what can DALL-E 2 actually do
4 Following the RRI, (Responsible research innovation) principles
5 Technology and society, a complex relationship
6 Technological mediation
References - The Future of Child Development in the AI Era. Cross-Disciplinary Perspectives Between AI and Child Development Experts / 2405.19275 / ISBN:https://doi.org/10.48550/arXiv.2405.19275 / Published by ArXiv / on (web) Publishing site
- 1. Introduction
2. Anticipated AI Use for Children
3. Discussion
Bibliography - Using Large Language Models for Humanitarian Frontline Negotiation: Opportunities and Considerations / 2405.20195 / ISBN:https://doi.org/10.48550/arXiv.2405.20195 / Published by ArXiv / on (web) Publishing site
- Abstract
1. Introduction
2. Related Work
6. Discussion
References
A. Appendix - There and Back Again: The AI Alignment Paradox / 2405.20806 / ISBN:https://doi.org/10.48550/arXiv.2405.20806 / Published by ArXiv / on (web) Publishing site
- Abstract
Paper
References - Responsible AI for Earth Observation / 2405.20868 / ISBN:https://doi.org/10.48550/arXiv.2405.20868 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Mitigating (Unfair) Bias
3 Secure AI in EO: Focusing on Defense Mechanisms, Uncertainty Modeling and Explainability
5 Maintaining Scientific Excellence, Open Data, and Guiding AI Usage Based on Ethical Principles in EO
6 AI&EO for Social Good
7 Responsible AI Integration in Business Innovation and Sustainability
8 Conclusions, Remarks and Future Directions
References - Gender Bias Detection in Court Decisions: A Brazilian Case Study / 2406.00393 / ISBN:https://doi.org/10.48550/arXiv.2406.00393 / Published by ArXiv / on (web) Publishing site
- 2 Related Work
3 Framework
4 Discussion
5 Final Remarks
Ethics Statement
References
C Biases - Transforming Computer Security and Public Trust Through the Exploration of Fine-Tuning Large Language Models / 2406.00628 / ISBN:https://doi.org/10.48550/arXiv.2406.00628 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Background, Foundational Studies, and Discussion:
3 Experimental Design, Overview, and Discussion
4 Comparative Analysis of Pre-Trained Models.
5 Discussion and further research
References - How Ethical Should AI Be? How AI Alignment Shapes the Risk Preferences of LLMs / 2406.01168 / ISBN:https://doi.org/10.48550/arXiv.2406.01168 / Published by ArXiv / on (web) Publishing site
- Abstract
Introduction
I. Description of Method/Empirical Design
II. Risk Characteristics of LLMs
III. Impact of Alignment on LLMs’ Risk Preferences
IV. Impact of Alignments on Corporate Investment Forecasts
V. Robustness: Transcript Readability and Investment Score Predictability
VI. Conclusions
References
Figures and tables - Evaluating AI fairness in credit scoring with the BRIO tool / 2406.03292 / ISBN:https://doi.org/10.48550/arXiv.2406.03292 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Preliminary Analysis
3 ML model construction
4 Fairness violation analysis in BRIO
6 Risk analysis via BRIO for the German Credit Dataset
7 Revenue analysis
8 Conclusions
References - Promoting Fairness and Diversity in Speech Datasets for Mental Health and Neurological Disorders Research / 2406.04116 / ISBN:https://doi.org/10.48550/arXiv.2406.04116 / Published by ArXiv / on (web) Publishing site
- Abstract
1. Introduction
2. A Case Study on DAIC-WoZ Depression Research
3. Related Work
4. Desiderata
References - MoralBench: Moral Evaluation of LLMs / 2406.04428 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Related Work
3 Benchmark and Method
4 Experiments
5 Conclusion
References
Appendix - Can Prompt Modifiers Control Bias? A Comparative Analysis of Text-to-Image Generative Models / 2406.05602 / Published by ArXiv / on (web) Publishing site
- Abstract
1. Introduction
2. Related Work
3. Bias Evaluation
4. Methodology
5. Results
6. Discussion
7. Conclusion
References
- Abstract
1 Introduction
2 Theories and Components of Deception
3 Reductionism & Previous Research in Deceptive AI
4 DAMAS: A MAS Framework for Deception Analysis
5 Conclusion
References - The Impact of AI on Academic Research and Publishing / 2406.06009 / Published by ArXiv / on (web) Publishing site
- Abstract
Introduction
Ethics of AI for Writing Papers
AI Policies Among Publishers
AI in Editorial Processes
Conclusion
References - An Empirical Design Justice Approach to Identifying Ethical Considerations in the Intersection of Large Language Models and Social Robotics / 2406.06400 / ISBN:https://doi.org/10.48550/arXiv.2406.06400 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Theoretical Background
3 Methodology
4 Findings
References - The Ethics of Interaction: Mitigating Security Threats in LLMs / 2401.12273 / ISBN:https://doi.org/10.48550/arXiv.2401.12273 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Why Ethics Matter in LLM Attacks?
3 Potential Misuse and Security Concerns
4 Towards Ethical Mitigation: A Proposed Methodology
5 Preemptive Ethical Measures
6 Ethical Response to LLM Attacks
References - Global AI Governance in Healthcare: A Cross-Jurisdictional Regulatory Analysis / 2406.08695 / ISBN:https://doi.org/10.48550/arXiv.2406.08695 / Published by ArXiv / on (web) Publishing site
- 2 Related Work
4 Global Regulatory Landscape of AI
5 Generative AI: The New Frontier
7 Future Directions
References
A Supplemental Tables - Fair by design: A sociotechnical approach to justifying the fairness of AI-enabled systems across the lifecycle / 2406.09029 / ISBN:https://doi.org/10.48550/arXiv.2406.09029 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Fairness and AI
3 Assuring fairness across the AI lifecycle
4 Assuring AI fairness in healthcare
References - Some things never change: how far generative AI can really change software engineering practice / 2406.09725 / ISBN:https://doi.org/10.48550/arXiv.2406.09725 / Published by ArXiv / on (web) Publishing site
- 2 Background and related work
4 Results - Federated Learning driven Large Language Models for Swarm Intelligence: A Survey / 2406.09831 / ISBN:https://doi.org/10.48550/arXiv.2406.09831 / Published by ArXiv / on (web) Publishing site
- Abstract
I. Introduction
II. Foundations and Integration of SI and LLM
III. Federated LLMs for Swarm Intelligence
IV. Learned Lessons and Open Challenges
V. Conclusion
References - Applications of Generative AI in Healthcare: algorithmic, ethical, legal and societal considerations / 2406.10632 / ISBN:https://doi.org/10.48550/arXiv.2406.10632 / Published by ArXiv / on (web) Publishing site
- Abstract
I. Introduction
II. Selection of application
III. Analysis
IV. Conclusion
References
Appendix A Societal aspects
Appendix B Legal aspects
Appendix C Algorithmic / technical aspects - Justice in Healthcare Artificial Intelligence in Africa / 2406.10653 / ISBN:https://doi.org/10.48550/arXiv.2406.10653 / Published by ArXiv / on (web) Publishing site
- 3. Ensuring Equitable Access to AI Technologies
5. Promoting Global Solidarity
6. Ensuring Sustainable AI Development
7. Addressing Bias and Enforcing Fairness
References - Conversational Agents as Catalysts for Critical Thinking: Challenging Design Fixation in Group Design / 2406.11125 / ISBN:https://doi.org/10.48550/arXiv.2406.11125 / Published by ArXiv / on (web) Publishing site
- Abstract
1 INTRODUCTION
2 BEYOND RECOMMENDATIONS: ENHANCING CRITICAL THINKING WITH GENERATIVE AI
3 CHALLENGES AND OPPORTUNITIES OF USING CONVERSATIONAL AGENTS IN GROUP DESIGN
4 POTENTIAL SCENARIO AND APPLICATIONS OF CONVERSATIONAL AGENTS IN GROUP DESIGN PROCESS
5 BALANCING CRITICAL THINKING WITH DESIGNER SATISFACTION AND MOTIVATION
REFERENCES - Current state of LLM Risks and AI Guardrails / 2406.12934 / ISBN:https://doi.org/10.48550/arXiv.2406.12934 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Large Language Model Risks
3 Strategies in Securing Large Language models
4 Challenges in Implementing Guardrails
5 Open Source Tools
7 Conclusion
References - Leveraging Large Language Models for Patient Engagement: The Power of Conversational AI in Digital Health / 2406.13659 / ISBN:https://doi.org/10.48550/arXiv.2406.13659 / Published by ArXiv / on (web) Publishing site
- Abstract
I. INTRODUCTION
II. RECENT ADVANCEMENTS IN LARGE LANGUAGE MODELS
III. CASE STUDIES: APPLICATIONS OF LLMS IN PATIENT ENGAGEMENT
IV. DISCUSSION AND FUTURE DIRECTIONS
V. CONCLUSION
ACKNOWLEDGMENTS
REFERENCES - Documenting Ethical Considerations in Open Source AI Models / 2406.18071 / ISBN:https://doi.org/10.48550/arXiv.2406.18071 / Published by ArXiv / on (web) Publishing site
- Abstract
1 INTRODUCTION
2 RELATED WORK
3 METHODOLOGY AND STUDY DESIGN
4 RESULTS
5 DISCUSSION AND IMPLICATIONS
6 THREATS TO VALIDITY
7 CONCLUSIONS
8 ACKNOWLEDGEMENTS
REFERENCES - AI Alignment through Reinforcement Learning from Human Feedback? Contradictions and Limitations / 2406.18346 / ISBN:https://doi.org/10.48550/arXiv.2406.18346 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Background
3 Limitations of RLxF
4 The Internal Tensions and Ethical Issues in RLxF
5 Rebooting Safety and Alignment: Integrating AI Ethics and System Safety
References - A Survey on Privacy Attacks Against Digital Twin Systems in AI-Robotics / 2406.18812 / ISBN:https://doi.org/10.48550/arXiv.2406.18812 / Published by ArXiv / on (web) Publishing site
- Abstract
I. INTRODUCTION AND MOTIVATION
II. BACKGROUND
III. ATTACKS ON DT-INTEGRATED AI ROBOTS
IV. DT-INTEGRATED ROBOTICS DESIGN CONSIDERATIONS AND DISCUSSION
V. CONCLUSION
ACKNOWLEDGEMENTS
REFERENCES - Staying vigilant in the Age of AI: From content generation to content authentication / 2407.00922 / ISBN:https://doi.org/10.48550/arXiv.2407.00922 / Published by ArXiv / on (web) Publishing site
- Abstract
Introduction
Art Practice: Human Reactions to Synthetic Fake Content
Emphasizing Reasoning Over Detection
Prospective Usage: Assessing Veracity in Everyday Content
Conclusions and Future Works
References - SecGenAI: Enhancing Security of Cloud-based Generative AI Applications within Australian Critical Technologies of National Interest / 2407.01110 / ISBN:https://doi.org/10.48550/arXiv.2407.01110 / Published by ArXiv / on (web) Publishing site
- Abstract
I. INTRODUCTION
II. UNDERSTANDING GENAI SECURITY
III. CRITICAL ANALYSIS
IV. SECGENAI FRAMEWORK REQUIREMENTS SPECIFICATIONS
V. DISCUSSIONS AND RECOMMENDATIONS
REFERENCES - Artificial intelligence, rationalization, and the limits of control in the public sector: the case of tax policy optimization / 2407.05336 / ISBN:https://doi.org/10.48550/arXiv.2407.05336 / Published by ArXiv / on (web) Publishing site
- Abstract
1. Introduction
2. Artificial intelligence as Weberian rationalization
3. Bureaucratization, tax policy, and equality
4. AI-driven tax policy to reduce economic inequality: a thought experiment
5. Freedom, equality, and self-determination in the iron cage
6. Conclusion
References - A Blueprint for Auditing Generative AI / 2407.05338 / ISBN:https://doi.org/10.48550/arXiv.2407.05338 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Why audit generative AI systems?
3 How to audit generative AI systems?
4 Governance audits
5 Model audits
6 Application audits
7 Clarifications and limitations
8 Conclusion
Bibliography - Challenges and Best Practices in Corporate AI Governance:Lessons from the Biopharmaceutical Industry / 2407.05339 / ISBN:https://doi.org/10.48550/arXiv.2407.05339 / Published by ArXiv / on (web) Publishing site
- 2 Case study | AstraZeneca’s AI governance journey
4 Discussion | Best practices and lessons learned
5 Concluding remarks | Upfront investments vs. long-term benefits
6 References - Operationalising AI governance through ethics-based auditing: An industry case study / 2407.06232 / Published by ArXiv / on (web) Publishing site
- 2. The need to operationalise AI governance
6. Lessons learned from AstraZeneca’s 2021 AI audit
REFERENCES - Auditing of AI: Legal, Ethical and Technical Approaches / 2407.06235 / Published by ArXiv / on (web) Publishing site
- 2 The evolution of auditing as a governance mechanism
3 The need to audit AI systems – a confluence of top-down and bottom-up pressures
4 Auditing of AI’s multidisciplinary foundations
References - Why should we ever automate moral decision making? / 2407.07671 / ISBN:https://doi.org/10.48550/arXiv.2407.07671 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Reasons for automated moral decision making - Evolving AI Collectives to Enhance Human Diversity and Enable Self-Regulation / 2402.12590 / ISBN:https://doi.org/10.48550/arXiv.2402.12590 / Published by ArXiv / on (web) Publishing site
- D. Results for Claude 3
- Unmasking Bias in AI: A Systematic Review of Bias Detection and Mitigation Strategies in Electronic Health Record-based Models / 2310.19917 / ISBN:https://doi.org/10.48550/arXiv.2310.19917 / Published by ArXiv / on (web) Publishing site
- References
- Potential Societal Biases of ChatGPT in Higher Education: A Scoping Review / 2311.14381 / ISBN:https://doi.org/10.48550/arXiv.2311.14381 / Published by ArXiv / on (web) Publishing site
- REFERENCES
- FUTURE-AI: International consensus guideline for trustworthy and deployable artificial intelligence in healthcare / 2309.12325 / ISBN:https://doi.org/10.48550/arXiv.2309.12325 / Published by ArXiv / on (web) Publishing site
- REFERENCES:
Table 1 - A Survey of Large Language Models for Healthcare: from Data, Technology, and Applications to Accountability and Ethics / 2310.05694 / ISBN:https://doi.org/10.48550/arXiv.2310.05694 / Published by ArXiv / on (web) Publishing site
- AUTHORS
- Rethinking Machine Ethics -- Can LLMs Perform Moral Reasoning through the Lens of Moral Theories? / 2308.15399 / ISBN:https://doi.org/10.48550/arXiv.2308.15399 / Published by ArXiv / on (web) Publishing site
- B Details of Instructions
C Experimental Details - PoliTune: Analyzing the Impact of Data Selection and Fine-Tuning on Economic and Political Biases in Large Language Models / 2404.08699 / ISBN:https://doi.org/10.48550/arXiv.2404.08699 / Published by ArXiv / on (web) Publishing site
- E GPT Scoring Prompt
- Bridging the Global Divide in AI Regulation: A Proposal for a Contextual, Coherent, and Commensurable Framework / 2303.11196 / ISBN:https://doi.org/10.48550/arXiv.2303.11196 / Published by ArXiv / on (web) Publishing site
- Abstract
I. Introduction
II. Global Divide in AI Regulation: Horizontal vs. Context-Specific
III. Striking a Balance Between the Two Approaches
IV. Proposing an Alternative 3C Framework
V. Conclusion - CogErgLLM: Exploring Large Language Model Systems Design Perspective Using Cognitive Ergonomics / 2407.02885 / ISBN:https://doi.org/10.48550/arXiv.2407.02885 / Published by ArXiv / on (web) Publishing site
- 1. Introduction
2. Background
4. Design Framework
7. Challenges and Opportunities
References - Past, Present, and Future: A Survey of The Evolution of Affective Robotics For Well-being / 2407.02957 / ISBN:https://doi.org/10.48550/arXiv.2407.02957 / Published by ArXiv / on (web) Publishing site
- I. Introduction
II. Background and Definitions
III. Method
IV. Evolution of Affective Robots for Well-Being
V. 10 Years of Affective Robotics
VI. Future Opportunities in Affective Robotics for Well-Being
References - With Great Power Comes Great Responsibility: The Role of Software Engineers / 2407.08823 / ISBN:https://doi.org/10.48550/arXiv.2407.08823 / Published by ArXiv / on (web) Publishing site
- Abstract
2 Background and Related Work
3 Future Research Challenges
References - Have We Reached AGI? Comparing ChatGPT, Claude, and Gemini to Human Literacy and Education Benchmarks / 2407.09573 / ISBN:https://doi.org/10.48550/arXiv.2407.09573 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Literature Review
3 Methodology
4 Data Analysis and Results
5 Discussion
6 Conclusion
References - Generative AI for Health Technology Assessment: Opportunities, Challenges, and Policy Considerations / 2407.11054 / ISBN:https://doi.org/10.48550/arXiv.2407.11054 / Published by ArXiv / on (web) Publishing site
- Abstract
Introduction
A brief history of AI and generative AI
Applications of generative AI in literature reviews and evidence synthesis
Applications of generative AI to evidence generation
Applications of generative AI to clinical trials
Applications of generative AI to health economic modeling
Limitations of generative AI in HTA applications
Policy landscape
Conclusion
Glossary
Appendix
References - Thorns and Algorithms: Navigating Generative AI Challenges Inspired by Giraffes and Acacias / 2407.11360 / ISBN:https://doi.org/10.48550/arXiv.2407.11360 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
3 Giraffe and Acacia: Reciprocal Adaptations and Shaping
4 Generative AI and Humans: Risks and Mitigation
6 Discussion
7 Recommendations: Fixing Gen AI’s Value Alignment
8 Conclusion
References - Prioritizing High-Consequence Biological Capabilities in Evaluations of Artificial Intelligence Models / 2407.13059 / ISBN:https://doi.org/10.48550/arXiv.2407.13059 / Published by ArXiv / on (web) Publishing site
- Abstract
Introduction
Proposed Approach to Determining High-Consequence Biological Capabilities of Concern
Next Steps for AI Biosecurity Evaluations
References - Assurance of AI Systems From a Dependability Perspective / 2407.13948 / ISBN:https://doi.org/10.48550/arXiv.2407.13948 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction: Assurance for Traditional Systems
2 Assurance for Systems Extended with AI and ML
3 Assurance of AI Systems for Specific Functions
4 Assurance for General-Purpose AI
5 Assurance and Alignment for AGI
6 Summary and Conclusion
References - Open Artificial Knowledge / 2407.14371 / ISBN:https://doi.org/10.48550/arXiv.2407.14371 / Published by ArXiv / on (web) Publishing site
- Abstract
1. Introduction
2. Key Challenges of Artificial Data
3. OAK Dataset
4. Automatic Prompt Generation
5. Use Considerations
6. Conclusion and Future Work
References
Appendices - Honest Computing: Achieving demonstrable data lineage and provenance for driving data and process-sensitive policies / 2407.14390 / ISBN:https://doi.org/10.48550/arXiv.2407.14390 / Published by ArXiv / on (web) Publishing site
- Abstract
1. Introduction
2. Threat Model for Honest Computing
3. Honest Computing reference specifications
4. Discussion
References - RogueGPT: dis-ethical tuning transforms ChatGPT4 into a Rogue AI in 158 Words / 2407.15009 / ISBN:https://doi.org/10.48550/arXiv.2407.15009 / Published by ArXiv / on (web) Publishing site
- Abstract
I. Introduction
II. Background
III. Methodology
IV. Results
V. Benchmarking with ChatGPT4 Default Interface
VI. Discussion
VII. Conclusion
References - Nudging Using Autonomous Agents: Risks and Ethical Considerations / 2407.16362 / ISBN:https://doi.org/10.48550/arXiv.2407.16362 / Published by ArXiv / on (web) Publishing site
- 2 Technology Mediated Nudging
4 Ethical Considerations
References - Mapping the individual, social, and biospheric impacts of Foundation Models / 2407.17129 / ISBN:https://doi.org/10.48550/arXiv.2407.17129 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Theoretical Lens: Expanding Views on Algorithmic Risks and Harms
3 Methods: Snowball and Structured Search
4 Mapping Individual, Social, and Biospheric Impacts of Foundation Models
5 Discussion: Grappling with the Scale and Interconnectedness of Foundation Models
6 Conclusion
Impact Statement
References
A Appendix - Navigating the United States Legislative Landscape on Voice Privacy: Existing Laws, Proposed Bills, Protection for Children, and Synthetic Data for AI / 2407.19677 / ISBN:https://doi.org/10.48550/arXiv.2407.19677 / Published by ArXiv / on (web) Publishing site
- 5. Regulations on Synthetic Data for AI
- Interactive embodied evolution for socially adept Artificial General Creatures / 2407.21357 / ISBN:https://doi.org/10.48550/arXiv.2407.21357 / Published by ArXiv / on (web) Publishing site
- Introduction
- Exploring the Role of Social Support when Integrating Generative AI into Small Business Workflows / 2407.21404 / ISBN:https://doi.org/10.48550/arXiv.2407.21404 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Related Work
4 Findings
References - Deepfake Media Forensics: State of the Art and Challenges Ahead / 2408.00388 / ISBN:https://doi.org/10.48550/arXiv.2408.00388 / Published by ArXiv / on (web) Publishing site
- Abstract
1. Introduction
2. Deepfake Detection
3. Deepfake Attribution and Recognition
4. Passive Deepfake Authentication Methods
5. Deepfakes Detection Method on Realistic Scenarios
6. Active Authentication
References - Integrating ESG and AI: A Comprehensive Responsible AI Assessment Framework / 2408.00965 / ISBN:https://doi.org/10.48550/arXiv.2408.00965 / Published by ArXiv / on (web) Publishing site
- 2 Background and Literature Review
4 ESG-AI framework
5 Discussion
References - AI for All: Identifying AI incidents Related to Diversity and Inclusion / 2408.01438 / ISBN:https://doi.org/10.48550/arXiv.2408.01438 / Published by ArXiv / on (web) Publishing site
- 2 Background and Related Work
5 Discussion and Implications
References - Surveys Considered Harmful? Reflecting on the Use of Surveys in AI Research, Development, and Governance / 2408.01458 / ISBN:https://doi.org/10.48550/arXiv.2408.01458 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Related Work
5 Discussion
References
B Additional Materials for Pilot Survey - Improving Large Language Model (LLM) fidelity through context-aware grounding: A systematic approach to reliability and veracity / 2408.04023 / ISBN:https://doi.org/10.48550/arXiv.2408.04023 / Published by ArXiv / on (web) Publishing site
- Abstract
1. Introduction
2. Related Work
3. Proposed framework
4. Model architecture and training parameters
5. Model Training
6. Results
7. Conclusion and Future Directions
References - AI-Driven Chatbot for Intrusion Detection in Edge Networks: Enhancing Cybersecurity with Ethical User Consent / 2408.04281 / ISBN:https://doi.org/10.48550/arXiv.2408.04281 / Published by ArXiv / on (web) Publishing site
- II. Related Work
III. Methodology
IV. Graphical User Interface (GUI)
V. Results
VI. Conclusion
References - Criticizing Ethics According to Artificial Intelligence / 2408.04609 / ISBN:https://doi.org/10.48550/arXiv.2408.04609 / Published by ArXiv / on (web) Publishing site
- 1 Preliminary notes
2 Clarifying conceptual ambiguities
3 Critical Reflection on AI Risks
5 Investigating fundamental normative issues
Bibliography - Between Copyright and Computer Science: The Law and Ethics of Generative AI / 2403.14653 / ISBN:https://doi.org/10.48550/arXiv.2403.14653 / Published by ArXiv / on (web) Publishing site
- Abstract
Introduction
I. The Why and How Behind LLMs
II. The Difference Between Academic and Commercial Research
III. A Guide for Data in LLM Research
IV. The Path Ahead
Conclusion - The Responsible Foundation Model Development Cheatsheet: A Review of Tools & Resources / 2406.16746 / ISBN:https://doi.org/10.48550/arXiv.2406.16746 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Methodology & Guidelines
3 Data Sources
4 Data Preparation
5 Data Documentation and Release
6 Model Training
7 Environmental Impact
8 Model Evaluation
9 Model Release & Monitoring
References
A Contributions - Recent Advances in Generative AI and Large Language Models: Current Status, Challenges, and Perspectives / 2407.14962 / ISBN:https://doi.org/10.48550/arXiv.2407.14962 / Published by ArXiv / on (web) Publishing site
- Abstract
I. Introduction
II. Generative AI
III. Language Modeling
IV. Challenges of Generative AI and LLMs
V. Bridging Research Gaps and Future Directions
References
Authors - VersusDebias: Universal Zero-Shot Debiasing for Text-to-Image Models via SLM-Based Prompt Engineering and Generative Adversary / 2407.19524 / ISBN:https://doi.org/10.48550/arXiv.2407.19524 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Related Works
3 Method
4 Experiment
5 Limitation and Future Work
6 Conclusion
References
Appendices - Speculations on Uncertainty and Humane Algorithms / 2408.06736 / ISBN:https://doi.org/10.48550/arXiv.2408.06736 / Published by ArXiv / on (web) Publishing site
- 2 The Numbers of the Future
3 Uncertainty Ex Machina
References - Visualization Atlases: Explaining and Exploring Complex Topics through Data, Visualization, and Narration / 2408.07483 / ISBN:https://doi.org/10.48550/arXiv.2408.07483 / Published by ArXiv / on (web) Publishing site
- 3 Visualization Atlas Design Patterns
4 Interviews with Visualization Atlas Creators - Neuro-Symbolic AI for Military Applications / 2408.09224 / ISBN:https://doi.org/10.48550/arXiv.2408.09224 / Published by ArXiv / on (web) Publishing site
- I. Introduction
II. Neuro-Symbolic AI
IV. Military Applications of Neuro-Symbolic AI
V. Challenges and Risks
VI. Interpretability and Explainability
VII. Conclusion
References - Conference Submission and Review Policies to Foster Responsible Computing Research / 2408.09678 / ISBN:https://doi.org/10.48550/arXiv.2408.09678 / Published by ArXiv / on (web) Publishing site
- Executive Summary
Accurate Reporting and Reproducibility
Use of Generative AI in CS Conference Publications
Ways to Incorporate Ethics Review into Publication Review Processes
The Growing Popularity of Preprint Archives
References - Don't Kill the Baby: The Case for AI in Arbitration / 2408.11608 / ISBN:https://doi.org/10.48550/arXiv.2408.11608 / Published by ArXiv / on (web) Publishing site
- Introduction
1. What is AI
3. Practical and Strategic Benefits of Using AI in Arbitration
1. Resistance Against AI Does Not Offer Conclusive Reasons for Outright Rejection
2. Let AI Grow Under Favorable Conditions: Avoiding Overly Moralistic Views
3. Arbitration Should Allow Flexible, Contract-Based Experimentation in a Fast-Evolving Regulatory Landscape - CIPHER: Cybersecurity Intelligent Penetration-testing Helper for Ethical Researcher / 2408.11650 / ISBN:https://doi.org/10.48550/arXiv.2408.11650 / Published by ArXiv / on (web) Publishing site
- Abstract
1. Introduction
2. Background and Related Works
3. Methodology
4. Experiment Results
5. Discussion and Future Works
6. Conclusion
References - The Problems with Proxies: Making Data Work Visible through Requester Practices / 2408.11667 / ISBN:https://doi.org/10.48550/arXiv.2408.11667 / Published by ArXiv / on (web) Publishing site
- Related Work
Methods
Findings
References - Promises and challenges of generative artificial intelligence for human learning / 2408.12143 / ISBN:https://doi.org/10.48550/arXiv.2408.12143 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Main
2 Promises
3 Challenges
4 Needs
5 Conclusion and Future Directions
References
Tables - Catalog of General Ethical Requirements for AI Certification / 2408.12289 / ISBN:https://doi.org/10.48550/arXiv.2408.12289 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Operationalizable minimum requirements
4 Compliance and implementation of the suggested assessments
5 Overall Ethical Requirements (O)
6 Fairness (F)
7 Privacy and Data Protection (P)
8 Safety and Robustness (SR)
9 Sustainability (SU)
10 Transparency and Explainability (T)
11 Truthfulness (TR) - Dataset | Mindset = Explainable AI | Interpretable AI / 2408.12420 / ISBN:https://doi.org/10.48550/arXiv.2408.12420 / Published by ArXiv / on (web) Publishing site
- 1. Introduction
2. Literature Review
3. Database and Experimental Setup
4. Experiment Implementation, Results and Analysis
5. Results Discussion
References - Is Generative AI the Next Tactical Cyber Weapon For Threat Actors? Unforeseen Implications of AI Generated Cyber Attacks / 2408.12806 / ISBN:https://doi.org/10.48550/arXiv.2408.12806 / Published by ArXiv / on (web) Publishing site
- Abstract
I. Introduction
II. Related Work
III. Generative AI
IV. Attack Methodology
References - Has Multimodal Learning Delivered Universal Intelligence in Healthcare? A Comprehensive Survey / 2408.12880 / ISBN:https://doi.org/10.48550/arXiv.2408.12880 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Preliminaries
3 Multimodal Medical Studies
4 Contrastive Foundation Models (CFMs)
5 Multimodal LLMs (MLLMs)
6 Discussions of Current Studies
7 Challenges and Future Directions
References
Appendix - Aligning XAI with EU Regulations for Smart Biomedical Devices: A Methodology for Compliance Analysis / 2408.15121 / ISBN:https://doi.org/10.48550/arXiv.2408.15121 / Published by ArXiv / on (web) Publishing site
- 2 Related Work
3 Methodology
4 Background
6 A Categorisation of XAI in Terms of Explanatory Goals
7 Case Studies: Closed-Loop and Semi-Closed-Loop Control
8 Instructions for Use & Discussion of Findings
9 Threats to Validity
References - What Is Required for Empathic AI? It Depends, and Why That Matters for AI Developers and Users / 2408.15354 / ISBN:https://doi.org/10.48550/arXiv.2408.15354 / Published by ArXiv / on (web) Publishing site
- “Fine cuts” of Empathy: Capabilities and Distinctions under the Empathy Umbrella
What Empathic Capabilities Do AIs Need?
Implications for AI Creators and Users
References - Trustworthy and Responsible AI for Human-Centric Autonomous Decision-Making Systems / 2408.15550 / ISBN:https://doi.org/10.48550/arXiv.2408.15550 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Trustworthy and Responsible AI Definition
3 Governance for Human-Centric Intelligence Systems
4 Biases
5 Trustworthy and Responsible AI in Human-centric Applications
6 Open Challenges
7 Guidelines and Recommendations
8 Conclusion and Final Remarks
Acknowledgments
References - A Survey for Large Language Models in Biomedicine / 2409.00133 / ISBN:https://doi.org/10.48550/arXiv.2409.00133 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Background
3 LLMs in Zero-Shot Biomedical Applications
4 Adapting General LLMs to the Biomedical Field
5 Discussion
6 Conclusion
References - Digital Homunculi: Reimagining Democracy Research with Generative Agents / 2409.00826 / ISBN:https://doi.org/10.48550/arXiv.2409.00826 / Published by ArXiv / on (web) Publishing site
- 1. Introduction
2. The Experimentation Bottleneck
3. How GenAI Could Make a Difference
4. Risks and Caveats
5. Annoyances or Dealbreakers?
6. Conclusion
References - The overlooked need for Ethics in Complexity Science: Why it matters / 2409.02002 / ISBN:https://doi.org/10.48550/arXiv.2409.02002 / Published by ArXiv / on (web) Publishing site
- Mapping ethical challenges in complexity science
Practical considerations for ethical actions in complexity science
References - AI Governance in Higher Education: Case Studies of Guidance at Big Ten Universities / 2409.02017 / ISBN:https://doi.org/10.48550/arXiv.2409.02017 / Published by ArXiv / on (web) Publishing site
- Background
Results
References - Preliminary Insights on Industry Practices for Addressing Fairness Debt / 2409.02432 / ISBN:https://doi.org/10.48550/arXiv.2409.02432 / Published by ArXiv / on (web) Publishing site
- Abstract
2 Fairness Debt
3 Method
4 Findings
5 Discussions
6 Conclusions - DetoxBench: Benchmarking Large Language Models for Multitask Fraud & Abuse Detection / 2409.06072 / ISBN:https://doi.org/10.48550/arXiv.2409.06072 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Prior Benchmarks
3 Data Details
4 LLM Services (Infrastructure)
5 Prompting
6 Results
7 Limitations
9 Conclusion & Future Work
References
10 Appendix - Trust and ethical considerations in a multi-modal, explainable AI-driven chatbot tutoring system: The case of collaboratively solving Rubik's Cube / 2402.01760 / ISBN:https://doi.org/10.48550/arXiv.2402.01760 / Published by ArXiv / on (web) Publishing site
- D. CausalRating: A Tool To Rate Sentiment Analysis Systems for Bias
- The Responsible Foundation Model Development Cheatsheet: A Review of Tools & Resources / 2406.16746 / ISBN:https://doi.org/10.48550/arXiv.2406.16746 / Published by ArXiv / on (web) Publishing site
- B Cheatsheet Samples
- Catalog of General Ethical Requirements for AI Certification / 2408.12289 / ISBN:https://doi.org/10.48550/arXiv.2408.12289 / Published by ArXiv / on (web) Publishing site
- References
- The overlooked need for Ethics in Complexity Science: Why it matters / 2409.02002 / ISBN:https://doi.org/10.48550/arXiv.2409.02002 / Published by ArXiv / on (web) Publishing site
- Annexes