If you need more than one keyword, modify the field and separate the keywords with an underscore (_).
The list of search keywords can be up to 50 characters long.
If you modify the keywords, press Enter within the field to confirm the new search key.
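For example, a combined search key might look like ethics_governance (hypothetical keywords, shown only to illustrate the underscore-separated format).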
Tag: text
Bibliography items where it occurs: 363
- The AI Index 2022 Annual Report / 2205.03468 / ISBN:https://doi.org/10.48550/arXiv.2205.03468 / Published by ArXiv / on (web) Publishing site
- Report highlights
Chapter 2 Technical Performance
Chapter 3 Technical AI Ethics
Chapter 4 The Economy and Education
Appendix - Exciting, Useful, Worrying, Futuristic: Public Perception of Artificial Intelligence in 8 Countries / 2001.00081 / ISBN:https://doi.org/10.48550/arXiv.2001.00081 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Background
3 Methodology
4 Findings
References - Ethics of AI: A Systematic Literature Review of Principles and Challenges / 2109.07906 / ISBN:https://doi.org/10.48550/arXiv.2109.07906 / Published by ArXiv / on (web) Publishing site
- 3 Research Method
References - AI Ethics Issues in Real World: Evidence from AI Incident Database / 2206.07635 / ISBN:https://doi.org/10.48550/arXiv.2206.07635 / Published by ArXiv / on (web) Publishing site
- References
- The Different Faces of AI Ethics Across the World: A Principle-Implementation Gap Analysis / 2206.03225 / ISBN:https://doi.org/10.48550/arXiv.2206.03225 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
3 Study Methodology
4 Evaluation of Ethical AI Principles
5 Evaluation of Ethical Principle Implementations
References - A Framework for Ethical AI at the United Nations / 2104.12547 / ISBN:https://doi.org/10.48550/arXiv.2104.12547 / Published by ArXiv / on (web) Publishing site
- 1. Problems with AI
2. Defining ethical AI
3. Implementing ethical AI - Worldwide AI Ethics: a review of 200 guidelines and recommendations for AI governance / 2206.11922 / ISBN:https://doi.org/10.48550/arXiv.2206.11922 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Related Work
3 Methodology
5 Discussion
6 Conclusion - Beyond Near- and Long-Term: Towards a Clearer Account of Research Priorities in AI Ethics and Society / 2001.04335 / ISBN:https://doi.org/10.48550/arXiv.2001.04335 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
References - ESR: Ethics and Society Review of Artificial Intelligence Research / 2106.11521 / ISBN:https://doi.org/10.48550/arXiv.2106.11521 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Related Work
3 The ESR Process
4 Deployment and Evaluation
References - On the Current and Emerging Challenges of Developing Fair and Ethical AI Solutions in Financial Services / 2111.01306 / ISBN:https://doi.org/10.48550/arXiv.2111.01306 / Published by ArXiv / on (web) Publishing site
- 3 Practical Challenges of Ethical AI
- A primer on AI ethics via arXiv - focus 2020-2023 / Kaggle / Published by Kaggle / on (web) Publishing site
- Section 1: Introduction and concept
Section 2: History and prospective
Section 3: Current trends 2020-2023
Appendix B: Data and charts from arXiv - What does it mean to be a responsible AI practitioner: An ontology of roles and skills / 2205.03946 / ISBN:https://doi.org/10.48550/arXiv.2205.03946 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
3 Methodology
Appendix A supplementary material - GPT detectors are biased against non-native English writers / 2304.02819 / ISBN:https://doi.org/10.48550/arXiv.2304.02819 / Published by ArXiv / on (web) Publishing site
- Abstract
Results
Discussion
References
Materials and Methods - Implementing Responsible AI: Tensions and Trade-Offs Between Ethics Aspects / 2304.08275 / ISBN:https://doi.org/10.48550/arXiv.2304.08275 / Published by ArXiv / on (web) Publishing site
- I. Introduction
II. Underlying Aspects
III. Interactions between Aspects
References - QB4AIRA: A Question Bank for AI Risk Assessment / 2305.09300 / ISBN:https://doi.org/10.48550/arXiv.2305.09300 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 The Question Bank: QB4AIRA
3 Evaluation
4 Conclusion - A multilevel framework for AI governance / 2307.03198 / ISBN:https://doi.org/10.48550/arXiv.2307.03198 / Published by ArXiv / on (web) Publishing site
- 1. Introduction
3. International and National Governance
4. Corporate Self-Governance
6. Psychology of Trust
8. Ethics and Trust Lenses in the Multilevel Framework
9. Virtue Ethics
References - From OECD to India: Exploring cross-cultural differences in perceived trust, responsibility and reliance of AI and human experts / 2307.15452 / ISBN:https://doi.org/10.48550/arXiv.2307.15452 / Published by ArXiv / on (web) Publishing site
- 1. Introduction
2. Method
4. Discussion and conclusion - The Ethics of AI Value Chains / 2307.16787 / ISBN:https://doi.org/10.48550/arXiv.2307.16787 / Published by ArXiv / on (web) Publishing site
- Abstract
1. Introduction
2. Theory
3. Methodology
4. Ethical Implications of AI Value Chains
5. Future Directions for Research, Practice, & Policy
6. Conclusion - Perceptions of the Fourth Industrial Revolution and Artificial Intelligence Impact on Society / 2308.02030 / ISBN:https://doi.org/10.48550/arXiv.2308.02030 / Published by ArXiv / on (web) Publishing site
- Abstract
Introduction
References - Regulating AI manipulation: Applying Insights from behavioral economics and psychology to enhance the practicality of the EU AI Act / 2308.02041 / ISBN:https://doi.org/10.48550/arXiv.2308.02041 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Clarifying Terminologies of Article-5: Insights from Behavioral Economics and Psychology
3 Enhancing Protection for the General Public and Vulnerable Groups
4 Conclusion - From Military to Healthcare: Adopting and Expanding Ethical Principles for Generative Artificial Intelligence / 2308.02448 / ISBN:https://doi.org/10.48550/arXiv.2308.02448 / Published by ArXiv / on (web) Publishing site
- Introduction
What is Generative Artificial Intelligence?
Applications in Military Versus Healthcare
References - Ethical Considerations and Policy Implications for Large Language Models: Guiding Responsible Development and Deployment / 2308.02678 / ISBN:https://doi.org/10.48550/arXiv.2308.02678 / Published by ArXiv / on (web) Publishing site
- Introduction
System-role
Image-related
Hallucination
Generation-related - Dual Governance: The intersection of centralized regulation and crowdsourced safety mechanisms for Generative AI / 2308.04448 / ISBN:https://doi.org/10.48550/arXiv.2308.04448 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Background
3 Policy scope
4 Centralized regulation in the US context
5 Crowdsourced safety mechanism
6 The dual governance framework
7 Limitations
8 Conclusion - Normative Ethics Principles for Responsible AI Systems: Taxonomy and Future Directions / 2208.12616 / ISBN:https://doi.org/10.48550/arXiv.2208.12616 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
3 Taxonomy of ethical principles
4 Previous operationalisation of ethical principles
5 Gaps in operationalising ethical principles
References - Bad, mad, and cooked: Moral responsibility for civilian harms in human-AI military teams / 2211.06326 / ISBN:https://doi.org/10.48550/arXiv.2211.06326 / Published by ArXiv / on (web) Publishing site
- Introduction
Responsibility in War
Computers, Autonomy and Accountability
Human Factors
AI Workplace Health and Safety Framework
References - The Future of ChatGPT-enabled Labor Market: A Preliminary Study / 2304.09823 / ISBN:https://doi.org/10.48550/arXiv.2304.09823 / Published by ArXiv / on (web) Publishing site
- 2 Results
5 Methods - A Survey of Safety and Trustworthiness of Large Language Models through the Lens of Verification and Validation / 2305.11391 / ISBN:https://doi.org/10.48550/arXiv.2305.11391 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Large Language Models
3 Vulnerabilities, Attack, and Limitations
5 Falsification and Evaluation
6 Verification
7 Runtime Monitor
Reference - Getting pwn'd by AI: Penetration Testing with Large Language Models / 2308.00121 / ISBN:https://doi.org/10.48550/arXiv.2308.00121 / Published by ArXiv / on (web) Publishing site
- 2 Background
5 A vision of AI-augmented pen-testing
References - Artificial Intelligence across Europe: A Study on Awareness, Attitude and Trust / 2308.09979 / ISBN:https://doi.org/10.48550/arXiv.2308.09979 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Results
3 Discussion
5 Supplementary material
References - Targeted Data Augmentation for bias mitigation / 2308.11386 / ISBN:https://doi.org/10.48550/arXiv.2308.11386 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
3 Targeted data augmentation
References - AIxArtist: A First-Person Tale of Interacting with Artificial Intelligence to Escape Creative Block / 2308.11424 / ISBN:https://doi.org/10.48550/arXiv.2308.11424 / Published by ArXiv / on (web) Publishing site
- Abstract
- Exploring the Power of Creative AI Tools and Game-Based Methodologies for Interactive Web-Based Programming / 2308.11649 / ISBN:https://doi.org/10.48550/arXiv.2308.11649 / Published by ArXiv / on (web) Publishing site
- 4 Enhancing User Experience through Creative AI Tools
5 Engaging Web-Based Programming with Game-Based Approaches
7 Navigating Constraints: Limitations of Creative AI and Game-Based Techniques
8 Real-World Applications: Showcasing Innovative Implementations
10 Privacy Concerns in Interactive Web-Based Programming for Education
11 Bias Awareness: Navigating AI-Generated Content in Education
12 The Future Landscape: Creative AI Tools and Game-Based Methodologies in Education
13 Case Study Example: Learning Success with Creative AI and Game-Based Techniques
14 Conclusion & Discussion - Collect, Measure, Repeat: Reliability Factors for Responsible AI Data Collection / 2308.12885 / ISBN:https://doi.org/10.48550/arXiv.2308.12885 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Related Work on Data Excellence
4 Published Annotation Tasks and Datasets
5 Results
References - Building Trust in Conversational AI: A Comprehensive Review and Solution Architecture for Explainable, Privacy-Aware Systems using LLMs and Knowledge Graph / 2308.13534 / ISBN:https://doi.org/10.48550/arXiv.2308.13534 / Published by ArXiv / on (web) Publishing site
- I. Introduction
II. Methods and training process of LLMs
III. Comprehensive review of state-of-the-art LLMs
IV. Applied and technology implications for LLMs
V. Market analysis of LLMs and cross-industry use cases
VI. Solution architecture for privacy-aware and trustworthy conversational AI
References - The Promise and Peril of Artificial Intelligence -- Violet Teaming Offers a Balanced Path Forward / 2308.14253 / ISBN:https://doi.org/10.48550/arXiv.2308.14253 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 The evolution of artificial intelligence: from theory to general capabilities
4 Integrating red teaming, blue teaming, and ethics with violet teaming
10 Supplemental & additional details
References - Artificial Intelligence in Career Counseling: A Test Case with ResumAI / 2308.14301 / ISBN:https://doi.org/10.48550/arXiv.2308.14301 / Published by ArXiv / on (web) Publishing site
- Abstract
3 Methods
4 Results and discussion - Rethinking Machine Ethics -- Can LLMs Perform Moral Reasoning through the Lens of Moral Theories? / 2308.15399 / ISBN:https://doi.org/10.48550/arXiv.2308.15399 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Related Works
3 Theory and Method
4 Experiment
References - The AI Revolution: Opportunities and Challenges for the Finance Sector / 2308.16538 / ISBN:https://doi.org/10.48550/arXiv.2308.16538 / Published by ArXiv / on (web) Publishing site
- Table of contents and index
2 Key AI technology in financial services
3 Benefits of AI use in the finance sector
4 Threats & potential pitfalls
5 Challenges
6 Regulation of AI and regulating through AI
References - Ethical Framework for Harnessing the Power of AI in Healthcare and Beyond / 2309.00064 / ISBN:https://doi.org/10.48550/arXiv.2309.00064 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Black box and lack of transparency
3 Bias and fairness
4 Human-centric AI
5 Ethical concerns and value alignment
6 Way forward
7 Conclusion
References - The Impact of Artificial Intelligence on the Evolution of Digital Education: A Comparative Study of OpenAI Text Generation Tools including ChatGPT, Bing Chat, Bard, and Ernie / 2309.02029 / ISBN:https://doi.org/10.48550/arXiv.2309.02029 / Published by ArXiv / on (web) Publishing site
- Abstract
1. Introduction
2. Related work
3. ChatGPT Training Process
4. Methods
5. Discussion
6. Conclusion
References - Pathway to Future Symbiotic Creativity / 2209.02388 / ISBN:https://doi.org/10.48550/arXiv.2209.02388 / Published by ArXiv / on (web) Publishing site
- Contents
Introduction
Part 1 - 1 Generative Systems: Mimicking Artifacts
Part 1 - 2 Appreciate Systems: Mimicking Styles
Part 1 - 3 Artistic Systems: Mimicking Inspiration
Part 2 Art Data and Human–Machine Interaction in Art Creation
Part 2 - 3 Photogrammetry / Volumetric Capture
Part 2 - 5 Immersive Visualisation: Machine to Human Manifestations
Part 3 - 2 Machine Artist Models
Part 3 - 3 Comparison with Generative Models
Part 3 - 4 Demonstration of the Proposed Framework
Part 4 NFTs and the Future Art Economy
Part 5 Ethical AI and Machine Artist
Part 5 - 1 Authorship and Ownership of AI-generated Works of Art
Part 5 - 2 Algorithmic Bias in Art Generation
References
Acknowledgment - FUTURE-AI: Guiding Principles and Consensus Recommendations for Trustworthy Artificial Intelligence in Medical Imaging / 2109.09658 / ISBN:https://doi.org/10.48550/arXiv.2109.09658 / Published by ArXiv / on (web) Publishing site
- 2. Fairness - For Equitable AI in Medical Imaging
3. Universality - For Standardised AI in Medical Imaging
4. Traceability - For Transparent and Dynamic AI in Medical Imaging
5. Usability - For Effective and Beneficial AI in Medical Imaging
6. Robustness - For Reliable AI in Medical Imaging
7. Explainability - For Enhanced Understanding of AI in Medical Imaging
9. Discussion and Conclusion
References - The Cambridge Law Corpus: A Corpus for Legal AI Research / 2309.12269 / ISBN:https://doi.org/10.48550/arXiv.2309.12269 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 The Cambridge Law Corpus
4 Experiments
General References
B Example XML case
C Case Outcome Task Description
D Case Outcome Annotation Instructions
F Evaluation of GPT Models
Cambridge Law Corpus: Datasheet - EALM: Introducing Multidimensional Ethical Alignment in Conversational Information Retrieval / 2310.00970 / ISBN:https://doi.org/10.48550/arXiv.2310.00970 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Related Work
3 Dataset Construction
4 Modeling Ethics
5 Experiments
Appendix
References - Security Considerations in AI-Robotics: A Survey of Current Methods, Challenges, and Opportunities / 2310.08565 / ISBN:https://doi.org/10.48550/arXiv.2310.08565 / Published by ArXiv / on (web) Publishing site
- I. Introduction and Motivation
II. AI-Robotics Systems Architecture
IV. Attack Surfaces
V. Ethical & Legal Concerns
VI. Human-Robot Interaction (HRI) Security Studies - If our aim is to build morality into an artificial agent, how might we begin to go about doing so? / 2310.08295 / ISBN:https://doi.org/10.48550/arXiv.2310.08295 / Published by ArXiv / on (web) Publishing site
- Abstract
4 AI Governance Principles - Deepfakes, Phrenology, Surveillance, and More! A Taxonomy of AI Privacy Risks / 2310.07879 / ISBN:https://doi.org/10.48550/arXiv.2310.07879 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Background and Related Work
3 Method
4 Taxonomy of AI Privacy Risks
5 Discussion
References - ClausewitzGPT Framework: A New Frontier in Theoretical Large Language Model Enhanced Information Operations / 2310.07099 / ISBN:https://doi.org/10.48550/arXiv.2310.07099 / Published by ArXiv / on (web) Publishing site
- Nation-State Advances in AI-driven Information Operations
Theoretical Impact of LLMs on Information Operations
ClausewitzGPT and Modern Strategy
Looking Forward: ClausewitzGPT - The AI Incident Database as an Educational Tool to Raise Awareness of AI Harms: A Classroom Exploration of Efficacy, Limitations, & Future Improvements / 2310.06269 / ISBN:https://doi.org/10.48550/arXiv.2310.06269 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
3 Analysis and Findings
4 Discussion
References - A Review of the Ethics of Artificial Intelligence and its Applications in the United States / 2310.05751 / ISBN:https://doi.org/10.48550/arXiv.2310.05751 / Published by ArXiv / on (web) Publishing site
- 2. Literature Review
3. AI Ethical Principles
4. Implementing the Practical Use of Ethical AI Applications
References - A Survey of Large Language Models for Healthcare: from Data, Technology, and Applications to Accountability and Ethics / 2310.05694 / ISBN:https://doi.org/10.48550/arXiv.2310.05694 / Published by ArXiv / on (web) Publishing site
- Abstract
I. INTRODUCTION
II. WHAT LLMS CAN DO FOR HEALTHCARE? FROM FUNDAMENTAL TASKS TO ADVANCED APPLICATIONS
III. FROM PLMS TO LLMS FOR HEALTHCARE
IV. TRAIN AND USE LLM FOR HEALTHCARE
V. EVALUATION METHOD
VI. IMPROVING FAIRNESS, ACCOUNTABILITY, TRANSPARENCY, AND ETHICS
VII. FUTURE WORK AND CONCLUSION
REFERENCES - STREAM: Social data and knowledge collective intelligence platform for TRaining Ethical AI Models / 2310.05563 / ISBN:https://doi.org/10.48550/arXiv.2310.05563 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 STREAM: Social data and knowledge collective intelligence platform for TRaining Ethical AI Models
3 The applications of STREAM - Regulation and NLP (RegNLP): Taming Large Language Models / 2310.05553 / ISBN:https://doi.org/10.48550/arXiv.2310.05553 / Published by ArXiv / on (web) Publishing site
- 2 Regulation: A Short Introduction
3 LLMs: Risk and Uncertainty
4 Scientific Expertise, Social Media and Regulatory Capture
5 Regulation and NLP (RegNLP): A New Field
References - Ethics of Artificial Intelligence and Robotics in the Architecture, Engineering, and Construction Industry / 2310.05414 / ISBN:https://doi.org/10.48550/arXiv.2310.05414 / Published by ArXiv / on (web) Publishing site
- 1. Introduction
4. Systematic Review and Scientometric Analysis
5. Ethical Issues of AI and Robotics in AEC Industry
7. Future Research Direction
References - Commercialized Generative AI: A Critical Study of the Feasibility and Ethics of Generating Native Advertising Using Large Language Models in Conversational Web Search / 2310.04892 / ISBN:https://doi.org/10.48550/arXiv.2310.04892 / Published by ArXiv / on (web) Publishing site
- Abstract
Introduction
Background and Related Work
Pilot Study: Text SERPs with Ads
Evaluation of the Pilot Study
Ethics of Generating Native Ads
Conclusion
References - Compromise in Multilateral Negotiations and the Global Regulation of Artificial Intelligence / 2309.17158 / ISBN:https://doi.org/10.48550/arXiv.2309.17158 / Published by ArXiv / on (web) Publishing site
- Abstract
1. Introduction
2. The practice of multilateral negotiation and the mechanisms of compromises
3. The liberal-sovereigntist multiplicity
4. Towards a compromise: drafting the normative hybridity
5. Text negotiations as normative testing
6. Conclusion
Notes
Bibliography
Annex 1. Text amendments and ambiguity - Towards A Unified Utilitarian Ethics Framework for Healthcare Artificial Intelligence / 2309.14617 / ISBN:https://doi.org/10.48550/arXiv.2309.14617 / Published by ArXiv / on (web) Publishing site
- Introduction
Why Ethics
Utilitarian Ethics
Results and Discussion - Risk of AI in Healthcare: A Comprehensive Literature Review and Study Framework / 2309.14530 / ISBN:https://doi.org/10.48550/arXiv.2309.14530 / Published by ArXiv / on (web) Publishing site
- 2. Methods for Comprehensive Review
3. Clinical Risks
4. Technical Risks
References - Autonomous Vehicles an overview on system, cyber security, risks, issues, and a way forward / 2309.14213 / ISBN:https://doi.org/10.48550/arXiv.2309.14213 / Published by ArXiv / on (web) Publishing site
- 2. Autonomous vehicles
5. Cybersecurity Risks
9. References - The Return on Investment in AI Ethics: A Holistic Framework / 2309.13057 / ISBN:https://doi.org/10.48550/arXiv.2309.13057 / Published by ArXiv / on (web) Publishing site
- 5. Discussion
- An Evaluation of GPT-4 on the ETHICS Dataset / 2309.10492 / ISBN:https://doi.org/10.48550/arXiv.2309.10492 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Datasets and Methods
4 Discussion - Who to Trust, How and Why: Untangling AI Ethics Principles, Trustworthiness and Trust / 2309.10318 / ISBN:https://doi.org/10.48550/arXiv.2309.10318 / Published by ArXiv / on (web) Publishing site
- Introduction
Trust
Trust in AI
Different Types of Trust
Trust and AI Ethics Principles
Trust in AI as Socio-Technical Systems
Conclusion - In Consideration of Indigenous Data Sovereignty: Data Mining as a Colonial Practice / 2309.10215 / ISBN:https://doi.org/10.48550/arXiv.2309.10215 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Definitions of Terms
5 Relating Case Studies to Indigenous Data Sovereignty and CARE Principles
References - The Glamorisation of Unpaid Labour: AI and its Influencers / 2308.02399 / ISBN:https://doi.org/10.48550/arXiv.2308.02399 / Published by ArXiv / on (web) Publishing site
- 2 Harms of Influencer Marketing
- AI & Blockchain as sustainable teaching and learning tools to cope with the 4IR / 2305.01088 / ISBN:https://doi.org/10.48550/arXiv.2305.01088 / Published by ArXiv / on (web) Publishing site
- 2. AI and blockchain in education: An overview of the benefits and challenges
5. AI-powered assessment and evaluation
6. Blockchain-based decentralized learning networks
7. AI-powered content creation and curation - Toward an Ethics of AI Belief / 2304.14577 / ISBN:https://doi.org/10.48550/arXiv.2304.14577 / Published by ArXiv / on (web) Publishing site
- 1. Introduction
3. Proposed Novel Topics in an Ethics of AI Belief
4. Nascent Extant Work that Falls Within the Ethics of AI Belief - Ensuring Trustworthy Medical Artificial Intelligence through Ethical and Philosophical Principles / 2304.11530 / ISBN:https://doi.org/10.48550/arXiv.2304.11530 / Published by ArXiv / on (web) Publishing site
- Ethical datasets and algorithm development guidelines
Towards solving key ethical challenges in Medical AI - Responsible AI Pattern Catalogue: A Collection of Best Practices for AI Governance and Engineering / 2209.04963 / ISBN:https://doi.org/10.48550/arXiv.2209.04963 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Methodology
3 Governance Patterns
4 Process Patterns
5 Product Patterns
6 Related Work
7 Threats to Validity - The Ethics of AI Value Chains / 2307.16787 / ISBN:https://doi.org/10.48550/arXiv.2307.16787 / Published by ArXiv / on (web) Publishing site
- Bibliography
- FUTURE-AI: International consensus guideline for trustworthy and deployable artificial intelligence in healthcare / 2309.12325 / ISBN:https://doi.org/10.48550/arXiv.2309.12325 / Published by ArXiv / on (web) Publishing site
- METHODS
FUTURE-AI GUIDELINE - Language Agents for Detecting Implicit Stereotypes in Text-to-Image Models at Scale / 2310.11778 / ISBN:https://doi.org/10.48550/arXiv.2310.11778 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Agent Design
3 Agent Benchmark
4 Agent Performance
5 Related Work
6 Conclusion and Future Work
References
Appendix A Data Details
Appendix B Experiment Details - Specific versus General Principles for Constitutional AI / 2310.13798 / ISBN:https://doi.org/10.48550/arXiv.2310.13798 / Published by ArXiv / on (web) Publishing site
- 2 AI feedback on specific problematic AI traits
4 Reinforcement Learning with Good-for-Humanity Preference Models
5 Related Work
References
H Samples
I Responses on Prompts from PALMS, LaMDA, and InstructGPT - The Self 2.0: How AI-Enhanced Self-Clones Transform Self-Perception and Improve Presentation Skills / 2310.15112 / ISBN:https://doi.org/10.48550/arXiv.2310.15112 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Related Work
3 Method
5 Discussion
References - Systematic AI Approach for AGI: Addressing Alignment, Energy, and AGI Grand Challenges / 2310.15274 / ISBN:https://doi.org/10.48550/arXiv.2310.15274 / Published by ArXiv / on (web) Publishing site
- References
- AI Alignment and Social Choice: Fundamental Limitations and Policy Implications / 2310.16048 / ISBN:https://doi.org/10.48550/arXiv.2310.16048 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
3 Arrow-Sen Impossibility Theorems for RLHF - A Comprehensive Review of AI-enabled Unmanned Aerial Vehicle: Trends, Vision, and Challenges / 2310.16360 / ISBN:https://doi.org/10.48550/arXiv.2310.16360 / Published by ArXiv / on (web) Publishing site
- IV. Artificial Intelligence Embedded UAV
References - Unpacking the Ethical Value Alignment in Big Models / 2310.17551 / ISBN:https://doi.org/10.48550/arXiv.2310.17551 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Risks and Ethical Issues of Big Model
3 Investigating the Ethical Values of Large Language Models
4 Equilibrium Alignment: A Prospective Paradigm for Ethical Value Alignment
References - Moral Responsibility for AI Systems / 2310.18040 / ISBN:https://doi.org/10.48550/arXiv.2310.18040 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Causal Models
3 The BvH and HK Definitions
4 The Causal Condition
6 Degree of Responsibility
Appendix - AI for Open Science: A Multi-Agent Perspective for Ethically Translating Data to Knowledge / 2310.18852 / ISBN:https://doi.org/10.48550/arXiv.2310.18852 / Published by ArXiv / on (web) Publishing site
- 2 Background and Related Work
4 Optimizing an Openness Metric in AI for Science
5 Why Openness in AI for Science
6 Conclusion and Future Work
References - Artificial Intelligence Ethics Education in Cybersecurity: Challenges and Opportunities: a focus group report / 2311.00903 / ISBN:https://doi.org/10.48550/arXiv.2311.00903 / Published by ArXiv / on (web) Publishing site
- Introduction
AI Ethics in Cybersecurity
Focus Group Protocol and Recruitment
Technical Issues - Human participants in AI research: Ethics and transparency in practice / 2311.01254 / ISBN:https://doi.org/10.48550/arXiv.2311.01254 / Published by ArXiv / on (web) Publishing site
- Abstract
I. Introduction
II. Contextual Concerns: Why AI Research Needs its Own Guidelines
III. Ethical Principles for AI Research with Human Participants
IV. Principles in Practice: Guidelines for AI Research with Human Participants
V. Conclusion
References
Appendix A Evaluating Current Practices for Human-Participants Research
Appendix C Defining the Scope of Research Participation in AI Research - LLMs grasp morality in concept / 2311.02294 / ISBN:https://doi.org/10.48550/arXiv.2311.02294 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 A General Theory of Meaning
3 The Meaning Model
4 The Moral Model
A Supplementary Material - Educating for AI Cybersecurity Work and Research: Ethics, Systems Thinking, and Communication Requirements / 2311.04326 / ISBN:https://doi.org/10.48550/arXiv.2311.04326 / Published by ArXiv / on (web) Publishing site
- Literature Review
- Towards Effective Paraphrasing for Information Disguise / 2311.05018 / ISBN:https://doi.org/10.1007/978-3-031-28238-6_22 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Related Work
3 Methodology
5 Conclusion
References - Kantian Deontology Meets AI Alignment: Towards Morally Grounded Fairness Metrics / 2311.05227 / ISBN:https://doi.org/10.48550/arXiv.2311.05227 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Overview of Kantian Deontology
3 Measuring Fairness Metrics
4 Deontological AI Alignment
5 Aligning with Deontological Principles: Use Cases
6 Conclusion - Unlocking the Potential of ChatGPT: A Comprehensive Exploration of its Applications, Advantages, Limitations, and Future Directions in Natural Language Processing / 2304.02017 / ISBN:https://doi.org/10.48550/arXiv.2304.02017 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Overview of ChatGPT and its capabilities
3 Transformers and pre-trained language models
4 Applications of ChatGPT in real-world scenarios
5 Advantages of ChatGPT in natural language processing
6 Limitations and potential challenges
7 Ethical considerations when using ChatGPT
8 Prompt engineering and generation
9 Future directions for ChatGPT and natural language processing
10 Future directions for ChatGPT in vision domain
References - Fairness And Bias in Artificial Intelligence: A Brief Survey of Sources, Impacts, And Mitigation Strategies / 2304.07683 / ISBN:https://doi.org/10.48550/arXiv.2304.07683 / Published by ArXiv / on (web) Publishing site
- II. Sources of bias in AI
IV. Mitigation strategies for bias in AI
V. Fairness in AI
VI. Mitigation strategies for fairness in AI - Towards ethical multimodal systems / 2304.13765 / ISBN:https://doi.org/10.48550/arXiv.2304.13765 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Related Work
4 A Multimodal Ethics Classifier - A Brief History of Prompt: Leveraging Language Models. (Through Advanced Prompting) / 2310.04438 / ISBN:https://doi.org/10.48550/arXiv.2310.04438 / Published by ArXiv / on (web) Publishing site
- Abstract
II. Introduction
III. Prehistoric prompting: pre NN-era
IV. History of NLP between 2010 and 2015: the pre-attention mechanism era
VI. 2015: birth of the transformer
VII. The second wave in 2017: rise of RL
VIII. The third wave 2018: the rise of transformers
IX. 2019: THE YEAR OF CONTROL
X. 2020-2021: the rise of LLMS
XI. 2022-current: beyond language generation
XII. Conclusions
References - Synergizing Human-AI Agency: A Guide of 23 Heuristics for Service Co-Creation with LLM-Based Agents / 2310.15065 / ISBN:https://doi.org/10.48550/arXiv.2310.15065 / Published by ArXiv / on (web) Publishing site
- Abstract
2 Related work
3 Method
4 Findings
5 Discussion
6 Conclusion
References - She had Cobalt Blue Eyes: Prompt Testing to Create Aligned and Sustainable Language Models / 2310.18333 / ISBN:https://doi.org/10.48550/arXiv.2310.18333 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Related Works
3 ReFLeCT: Robust, Fair, and Safe LLM Construction Test Suite
4 Empirical Evaluation and Outcomes
References - Safety, Trust, and Ethics Considerations for Human-AI Teaming in Aerospace Control / 2311.08943 / ISBN:https://doi.org/10.48550/arXiv.2311.08943 / Published by ArXiv / on (web) Publishing site
- I. Introduction
III. Safety
IV. Trust
References - How Trustworthy are Open-Source LLMs? An Assessment under Malicious Demonstrations Shows their Vulnerabilities / 2311.09447 / ISBN:https://doi.org/10.48550/arXiv.2311.09447 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Related Work
3 Methodology
References - Prudent Silence or Foolish Babble? Examining Large Language Models' Responses to the Unknown / 2311.09731 / ISBN:https://doi.org/10.48550/arXiv.2311.09731 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
3 Experiments
4 Related Work
References
B Confidence Elicitation Method Comparison
C Additional Details of the Dataset Construction
E Prompt Templates - Revolutionizing Customer Interactions: Insights and Challenges in Deploying ChatGPT and Generative Chatbots for FAQs / 2311.09976 / ISBN:https://doi.org/10.48550/arXiv.2311.09976 / Published by ArXiv / on (web) Publishing site
- 1. Introduction
2. Chatbots Background and Scope of Research
3. Chatbot approaches overview: Taxonomy of existing methods
4. ChatGPT
5. Applications
6. Open challenges
7. Future Research Directions - Practical Cybersecurity Ethics: Mapping CyBOK to Ethical Concerns / 2311.10165 / ISBN:https://doi.org/10.48550/arXiv.2311.10165 / Published by ArXiv / on (web) Publishing site
- 2 Background
3 Methodology
4 Findings
5 Discussion
7 Conclusion
References - First, Do No Harm: Algorithms, AI, and Digital Product Liability Managing Algorithmic Harms Through Liability Law and Market Incentives / 2311.10861 / ISBN:https://doi.org/10.48550/arXiv.2311.10861 / Published by ArXiv / on (web) Publishing site
- The Problem
Harms, Risk, and Liability Practices
Appendix A - What is an Algorithmic Harm? And a Bibliography - Case Repositories: Towards Case-Based Reasoning for AI Alignment / 2311.10934 / ISBN:https://doi.org/10.48550/arXiv.2311.10934 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Proposed Process
3 Related Work and Discussion
References - Responsible AI Considerations in Text Summarization Research: A Review of Current Practices / 2311.11103 / ISBN:https://doi.org/10.48550/arXiv.2311.11103 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Background & Related Work
3 Methods
4 Findings
5 Discussion and Recommendations
6 Conclusion
References
B Methodology - Assessing AI Impact Assessments: A Classroom Study / 2311.11193 / ISBN:https://doi.org/10.48550/arXiv.2311.11193 / Published by ArXiv / on (web) Publishing site
- 3 Study Design
4 Findings
5 Discussion
References
A Overview of AIIA Instruments
B Study Materials
C Extended Results - GPT in Data Science: A Practical Exploration of Model Selection / 2311.11516 / ISBN:https://doi.org/10.48550/arXiv.2311.11516 / Published by ArXiv / on (web) Publishing site
- I. Introduction
II. Background
III. Approach: capturing and representing heuristics behind GPT's decision-making process
V. Conclusion and future work
VI. Future work
References - Responsible AI Research Needs Impact Statements Too / 2311.11776 / ISBN:https://doi.org/10.48550/arXiv.2311.11776 / Published by ArXiv / on (web) Publishing site
- Requiring adverse impact statements for RAI research is long overdue
Suggestions for More Meaningful Engagement with the Impact of RAI Research - Large Language Models in Education: Vision and Opportunities / 2311.13160 / ISBN:https://doi.org/10.48550/arXiv.2311.13160 / Published by ArXiv / on (web) Publishing site
- II. Education and LLMS
III. Key technologies for EDULLMS
IV. LLM-empowered education
V. Key points in LLMSEDU
References - The Rise of Creative Machines: Exploring the Impact of Generative AI / 2311.13262 / ISBN:https://doi.org/10.48550/arXiv.2311.13262 / Published by ArXiv / on (web) Publishing site
- Abstract
I. Introduction
II. Extent and impact of generative AI
III. Insights from top generative AI companies
References - Towards Auditing Large Language Models: Improving Text-based Stereotype Detection / 2311.14126 / ISBN:https://doi.org/10.48550/arXiv.2311.14126 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Related Works
3 Methodology
5 Conclusion and Future Work
References
6 Appendix - Ethical Implications of ChatGPT in Higher Education: A Scoping Review / 2311.14378 / ISBN:https://doi.org/10.48550/arXiv.2311.14378 / Published by ArXiv / on (web) Publishing site
- Introduction
- Potential Societal Biases of ChatGPT in Higher Education: A Scoping Review / 2311.14381 / ISBN:https://doi.org/10.48550/arXiv.2311.14381 / Published by ArXiv / on (web) Publishing site
- Abstract
INTRODUCTION
OVERVIEW OF SOCIETAL BIASES IN GAI MODELS
FINDINGS
DISCUSSION
CONCLUSION - RAISE -- Radiology AI Safety, an End-to-end lifecycle approach / 2311.14570 / ISBN:https://doi.org/10.48550/arXiv.2311.14570 / Published by ArXiv / on (web) Publishing site
- 2. Pre-Deployment phase
- Ethics and Responsible AI Deployment / 2311.14705 / ISBN:https://doi.org/10.48550/arXiv.2311.14705 / Published by ArXiv / on (web) Publishing site
- 3. Ethical considerations in AI decision-making
10. Conclusion - From deepfake to deep useful: risks and opportunities through a systematic literature review / 2311.15809 / ISBN:https://doi.org/10.48550/arXiv.2311.15809 / Published by ArXiv / on (web) Publishing site
- Abstract
1. Introduction
References - Generative AI and US Intellectual Property Law / 2311.16023 / ISBN:https://doi.org/10.48550/arXiv.2311.16023 / Published by ArXiv / on (web) Publishing site
- III. US Copyright law
V. Potential harms and mitigation
VII. Future considerations - Survey on AI Ethics: A Socio-technical Perspective / 2311.17228 / ISBN:https://doi.org/10.48550/arXiv.2311.17228 / Published by ArXiv / on (web) Publishing site
- 3 Transparency and explainability
4 Fairness and equity
5 Responsibility, accountability, and regulations
References - Deepfakes, Misinformation, and Disinformation in the Era of Frontier AI, Generative AI, and Large AI Models / 2311.17394 / ISBN:https://doi.org/10.48550/arXiv.2311.17394 / Published by ArXiv / on (web) Publishing site
- I. Introduction
II. Background
III. The rise of large AI models
V. Technical defense mechanisms
References - Navigating Privacy and Copyright Challenges Across the Data Lifecycle of Generative AI / 2311.18252 / ISBN:https://doi.org/10.48550/arXiv.2311.18252 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Legal Basis of Privacy and Copyright Concerns over Generative AI
3 Mapping Challenges throughout the Data Lifecycle
References - From Lab to Field: Real-World Evaluation of an AI-Driven Smart Video Solution to Enhance Community Safety / 2312.02078 / ISBN:https://doi.org/10.48550/arXiv.2312.02078 / Published by ArXiv / on (web) Publishing site
- Software system features
- Understanding Teacher Perspectives and Experiences after Deployment of AI Literacy Curriculum in Middle-school Classrooms / 2312.04839 / ISBN:https://doi.org/10.48550/arXiv.2312.04839 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Methodology
3 Results
4 Conclusions - Generative AI in Higher Education: Seeing ChatGPT Through Universities' Policies, Resources, and Guidelines / 2312.05235 / ISBN:https://doi.org/10.48550/arXiv.2312.05235 / Published by ArXiv / on (web) Publishing site
- Abstract
1. Introduction
2. Research questions
3. Literature review
4. Method
5. Results
6. Discussion
7. Conclusion - Contra generative AI detection in higher education assessments / 2312.05241 / ISBN:https://doi.org/10.48550/arXiv.2312.05241 / Published by ArXiv / on (web) Publishing site
- Abstract
1. Introduction
2. The pitfalls in detecting generative AI output
3. Detectors are not useful
4. Teach critical usage of AI
References - Intelligence Primer / 2008.07324 / ISBN:https://doi.org/10.48550/arXiv.2008.07324 / Published by ArXiv / on (web) Publishing site
- 4 Bias, prejudice, and individuality
12 Large language models and Generative AI
15 Final thoughts
References - RE-centric Recommendations for the Development of Trustworthy(er) Autonomous Systems / 2306.01774 / ISBN:https://doi.org/10.48550/arXiv.2306.01774 / Published by ArXiv / on (web) Publishing site
- 4 Results & analysis
- Ethical Considerations Towards Protestware / 2306.10019 / ISBN:https://doi.org/10.48550/arXiv.2306.10019 / Published by ArXiv / on (web) Publishing site
- I. Introduction
II. Background
III. Ethics: a primer - Control Risk for Potential Misuse of Artificial Intelligence in Science / 2312.06632 / ISBN:https://doi.org/10.48550/arXiv.2312.06632 / Published by ArXiv / on (web) Publishing site
- Abstract
2 Risks of Misuse for Artificial Intelligence in Science
3 Control the Risks of AI Models in Science
4 Call for Responsible AI for Science
6 Related Works
Appendix A Assessing the Risks of AI Misuse in Scientific Research
Appendix B Details of Risks Demonstration in Chemical Science
Appendix C Detailed Implementation of SciGuard - Disentangling Perceptions of Offensiveness: Cultural and Moral Correlates / 2312.06861 / ISBN:https://doi.org/10.48550/arXiv.2312.06861 / Published by ArXiv / on (web) Publishing site
- Abstract
...
Data Collection
Moral Factors
References
A Appendix - The AI Assessment Scale (AIAS): A Framework for Ethical Integration of Generative AI in Educational Assessment / 2312.07086 / ISBN:https://doi.org/10.48550/arXiv.2312.07086 / Published by ArXiv / on (web) Publishing site
- Introduction
Problematizing The View Of GenAI Content As Academic Misconduct
The AI Assessment Scale
Conclusion
Conflict of Interest
References - Culturally Responsive Artificial Intelligence -- Problems, Challenges and Solutions / 2312.08467 / ISBN:https://doi.org/10.48550/arXiv.2312.08467 / Published by ArXiv / on (web) Publishing site
- Introduction
The concept of multiculturalism and its importance
Artificial intelligence – concept and ethical background
Culturally responsive AI – current landscape
Recommendations - Investigating Responsible AI for Scientific Research: An Empirical Study / 2312.09561 / ISBN:https://doi.org/10.48550/arXiv.2312.09561 / Published by ArXiv / on (web) Publishing site
- II. Background and motivation
III. Research methodology
Appendix A – Survey Questionnaire - Designing Guiding Principles for NLP for Healthcare: A Case Study of Maternal Health / 2312.11803 / ISBN:https://doi.org/10.48550/arXiv.2312.11803 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Objective
2 Background and significance
3 Materials and methods
4 Results
5 Discussion
6 Conclusion
References
B Extended Guiding Principles
C Full survey questions - Beyond Fairness: Alternative Moral Dimensions for Assessing Algorithms and Designing Systems / 2312.12559 / ISBN:https://doi.org/10.48550/arXiv.2312.12559 / Published by ArXiv / on (web) Publishing site
- 2 The Reign of Algorithmic Fairness
3 Taking a Step Forward
4 Limitations
5 Conclusion
References - Learning Human-like Representations to Enable Learning Human Values / 2312.14106 / ISBN:https://doi.org/10.48550/arXiv.2312.14106 / Published by ArXiv / on (web) Publishing site
- Abstract
1. Introduction
2. Related Work
3. Experiments on Synthetic Data
4. Experiments on Human Data using Language Models
References
A. Appendix - Improving Task Instructions for Data Annotators: How Clear Rules and Higher Pay Increase Performance in Data Annotation in the AI Economy / 2312.14565 / ISBN:https://doi.org/10.48550/arXiv.2312.14565 / Published by ArXiv / on (web) Publishing site
- Abstract
I. Introduction
II. Theoretical background and hypotheses
V. Discussion
References - Culturally-Attuned Moral Machines: Implicit Learning of Human Value Systems by AI through Inverse Reinforcement Learning / 2312.17479 / ISBN:https://doi.org/10.48550/arXiv.2312.17479 / Published by ArXiv / on (web) Publishing site
- Introduction
Experimental Study
Results
Discussion
Methods
Supplementary Material - Autonomous Threat Hunting: A Future Paradigm for AI-Driven Threat Intelligence / 2401.00286 / ISBN:https://doi.org/10.48550/arXiv.2401.00286 / Published by ArXiv / on (web) Publishing site
- 2. Foundations of AI-driven threat intelligence
3. Autonomous threat hunting: conceptual framework
4. State-of-the-art AI techniques in autonomous threat hunting
5. Challenges in autonomous threat hunting
6. Case studies and applications
8. Future directions and emerging trends
9. Conclusion
References - Exploring the Frontiers of LLMs in Psychological Applications: A Comprehensive Review / 2401.01519 / ISBN:https://doi.org/10.48550/arXiv.2401.01519 / Published by ArXiv / on (web) Publishing site
- Abstract
1. Introduction
2. LLMs in cognitive and behavioral psychology
3. LLMs in clinical and counseling psychology
5. LLMs in social and cultural psychology
6. LLMs as research tools in psychology
7. Challenges and future directions
8. Conclusion - Synthetic Data in AI: Challenges, Applications, and Ethical Implications / 2401.01629 / ISBN:https://doi.org/10.48550/arXiv.2401.01629 / Published by ArXiv / on (web) Publishing site
- 2. The generation of synthetic data
3. The usage of synthetic data
References - MULTI-CASE: A Transformer-based Ethics-aware Multimodal Investigative Intelligence Framework / 2401.01955 / ISBN:https://doi.org/10.48550/arXiv.2401.01955 / Published by ArXiv / on (web) Publishing site
- Abstract
I. Introduction
II. Related work
III. Methodology: model development
IV. System design
V. Evaluation
VI. Discussion and future work
VII. Conclusion
References - AI Ethics Principles in Practice: Perspectives of Designers and Developers / 2112.07467 / ISBN:https://doi.org/10.48550/arXiv.2112.07467 / Published by ArXiv / on (web) Publishing site
- I. Introduction
III. Methods
IV. Results
V. Discussion and suggestions
VI. Support mechanisms
VII. Conclusion - Unmasking Bias in AI: A Systematic Review of Bias Detection and Mitigation Strategies in Electronic Health Record-based Models / 2310.19917 / ISBN:https://doi.org/10.48550/arXiv.2310.19917 / Published by ArXiv / on (web) Publishing site
- Materials and methods
Results
Discussion - Resolving Ethics Trade-offs in Implementing Responsible AI / 2401.08103 / ISBN:https://doi.org/10.48550/arXiv.2401.08103 / Published by ArXiv / on (web) Publishing site
- Abstract
II. Approaches for Resolving Trade-offs
III. Discussion and Recommendations
IV. Concluding Remarks - Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making / 2401.08691 / ISBN:https://doi.org/10.48550/arXiv.2401.08691 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
I Understanding bias - 2 Bias and moral framework in AI-based decision making
4 Fairness metrics landscape in machine learning
II Mitigating bias - 5 Fairness mitigation
6 FFTree: a flexible tree to mitigate multiple fairness criteria
III Accounting for bias - 7 Addressing fairness in the banking sector
8 Fairview: an evaluative AI support for addressing fairness
9 Towards fairness through time
IV Conclusions
Bibliography - Business and ethical concerns in domestic Conversational Generative AI-empowered multi-robot systems / 2401.09473 / ISBN:https://doi.org/10.48550/arXiv.2401.09473 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Background
3 Method - FAIR Enough How Can We Develop and Assess a FAIR-Compliant Dataset for Large Language Models' Training? / 2401.11033 / ISBN:https://doi.org/10.48550/arXiv.2401.11033 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 FAIR Data Principles: Theoretical Background and Significance
3 Data Management Challenges in Large Language Models
4 Framework for FAIR Data Principles Integration in LLM Development
References
Appendices - Enabling Global Image Data Sharing in the Life Sciences / 2401.13023 / ISBN:https://doi.org/10.48550/arXiv.2401.13023 / Published by ArXiv / on (web) Publishing site
- 2. Background
3. Use cases representing different image data types and their challenges and status for sharing
References - Beyond principlism: Practical strategies for ethical AI use in research practices / 2401.15284 / ISBN:https://doi.org/10.48550/arXiv.2401.15284 / Published by ArXiv / on (web) Publishing site
- Abstract
1 The “Triple-Too” problem of AI ethics
2 A shift to user-centered realism in scientific contexts
3 Five specific goals and action-guiding strategies for ethical AI use in research practices
References - A Scoping Study of Evaluation Practices for Responsible AI Tools: Steps Towards Effectiveness Evaluations / 2401.17486 / ISBN:https://doi.org/10.48550/arXiv.2401.17486 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Related work
3 Methods
4 RAI tool evaluation practices
5 Towards evaluation of RAI tool effectiveness
6 Limitations
References
D Summary of themes and codes - Detecting Multimedia Generated by Large AI Models: A Survey / 2402.00045 / ISBN:https://doi.org/10.48550/arXiv.2402.00045 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Generation
3 Detection
4 Tools
5 Discussion
6 Conclusion
References - Responsible developments and networking research: a reflection beyond a paper ethical statement / 2402.00442 / ISBN:https://doi.org/10.48550/arXiv.2402.00442 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Networking research today
3 Beyond technical dimensions
4 Sense of engagement and responsibility
5 Possible next steps - Generative Artificial Intelligence in Higher Education: Evidence from an Analysis of Institutional Policies and Guidelines / 2402.01659 / ISBN:https://doi.org/10.48550/arXiv.2402.01659 / Published by ArXiv / on (web) Publishing site
- 1. Introduction
2. Related literature
3. Research study
4. Findings
6. Conclusions, limitations, and future work
References - Trust and ethical considerations in a multi-modal, explainable AI-driven chatbot tutoring system: The case of collaboratively solving Rubik's Cubeà / 2402.01760 / ISBN:https://doi.org/10.48550/arXiv.2402.01760 / Published by ArXiv / on (web) Publishing site
- Abstract
1. Introduction
3. Methodology
4. Discussion
References
C. ROSE: Tool and Data ResOurces to Explore the Instability of SEntiment Analysis Systems - Commercial AI, Conflict, and Moral Responsibility: A theoretical analysis and practical approach to the moral responsibilities associated with dual-use AI technology / 2402.01762 / ISBN:https://doi.org/10.48550/arXiv.2402.01762 / Published by ArXiv / on (web) Publishing site
- 2 Establishing the novel aspect of AI as a crossover technology
4 Recommendations to address threats posed by crossover AI technology
References - (A)I Am Not a Lawyer, But...: Engaging Legal Experts towards Responsible LLM Policies for Legal Advice / 2402.01864 / ISBN:https://doi.org/10.48550/arXiv.2402.01864 / Published by ArXiv / on (web) Publishing site
- 3 Methods: case-based expert deliberation
4 Results
5 Discussion
References - POLARIS: A framework to guide the development of Trustworthy AI systems / 2402.05340 / ISBN:https://doi.org/10.48550/arXiv.2402.05340 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Background
3 State of the practice
4 The POLARIS framework
5 POLARIS framework application - Face Recognition: to Deploy or not to Deploy? A Framework for Assessing the Proportional Use of Face Recognition Systems in Real-World Scenarios / 2402.05731 / ISBN:https://doi.org/10.48550/arXiv.2402.05731 / Published by ArXiv / on (web) Publishing site
- Abstract
1. Introduction
2. Background
3. Intervention models from other fields
4. Proposed framework
6. Compliance with International Regulations - Ethics in AI through the Practitioner's View: A Grounded Theory Literature Review / 2206.09514 / ISBN:https://doi.org/10.48550/arXiv.2206.09514 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Background
3 Review Methodology
5 Findings
6 Discussion and Recommendations
7 Methodological Lessons Learned
A List of Included Studies
References - Generative Artificial Intelligence in Healthcare: Ethical Considerations and Assessment Checklist / 2311.02107 / ISBN:https://doi.org/10.48550/arXiv.2311.02107 / Published by ArXiv / on (web) Publishing site
- Introduction
Methods
Results
Discussion
Appendix - How do machines learn? Evaluating the AIcon2abs method / 2401.07386 / ISBN:https://doi.org/10.48550/arXiv.2401.07386 / Published by ArXiv / on (web) Publishing site
- 1. Introduction
2. Research methodology and text structure
3. AIcon2abs Instructional Unit
4. Results
References - I Think, Therefore I am: Benchmarking Awareness of Large Language Models Using AwareBench / 2401.17882 / ISBN:https://doi.org/10.48550/arXiv.2401.17882 / Published by ArXiv / on (web) Publishing site
- 2 Related Work
3 Awareness in LLMs
4 Awareness Dataset: AWAREEVAL
References
A AWAREEVAL Dataset Details
B Experimental Settings & Results - Mapping the Ethics of Generative AI: A Comprehensive Scoping Review / 2402.08323 / ISBN:https://doi.org/10.48550/arXiv.2402.08323 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Methods
3 Results
4 Discussion
References
Appendix A - Taking Training Seriously: Human Guidance and Management-Based Regulation of Artificial Intelligence / 2402.08466 / ISBN:https://doi.org/10.48550/arXiv.2402.08466 / Published by ArXiv / on (web) Publishing site
- 2 Emerging Management-based AI Regulation
4 Techniques of Human-Guided Training
5 Advantages of Human-Guided Training
6 Limitations - User Modeling and User Profiling: A Comprehensive Survey / 2402.09660 / ISBN:https://doi.org/10.48550/arXiv.2402.09660 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Analysis of the Terminology
3 Paradigm Shifts and New Trends
4 Current Taxonomy
5 Discussion and Future Research Directions
References - Inadequacies of Large Language Model Benchmarks in the Era of Generative Artificial Intelligence / 2402.09880 / ISBN:https://doi.org/10.48550/arXiv.2402.09880 / Published by ArXiv / on (web) Publishing site
- I. Introduction
II. Background and Related Work
III. Unified Evaluation Framework For LLM Benchmarks
IV. Technological Aspects
VII. Discussions
Appendix A Examples of Benchmark Inadequacies in Technological Aspects
Appendix B Examples of Benchmark Inadequacies in Processual Elements
Appendix C Examples of Benchmark Inadequacies in Human Dynamics - Copyleft for Alleviating AIGC Copyright Dilemma: What-if Analysis, Public Perception and Implications / 2402.12216 / ISBN:https://doi.org/10.48550/arXiv.2402.12216 / Published by ArXiv / on (web) Publishing site
- 2 Related Work
- Evolving AI Collectives to Enhance Human Diversity and Enable Self-Regulation / 2402.12590 / ISBN:https://doi.org/10.48550/arXiv.2402.12590 / Published by ArXiv / on (web) Publishing site
- 1. Introduction
2. Emergence of Free-Formed AI Collectives
3. Enhanced Performance of Free-Formed AI Collectives
5. Open Challenges for Free-Formed AI Collectives
References - What if LLMs Have Different World Views: Simulating Alien Civilizations with LLM-based Agents / 2402.13184 / ISBN:https://doi.org/10.48550/arXiv.2402.13184 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
3 CosmoAgent Simulation Setting
4 CosmoAgent Architecture
7 Results
A CosmoAgent Prompt
B Secretary Agent Prompt - The METRIC-framework for assessing data quality for trustworthy AI in medicine: a systematic review / 2402.13635 / ISBN:https://doi.org/10.48550/arXiv.2402.13635 / Published by ArXiv / on (web) Publishing site
- Results
METRIC-framework for medical training data
Methods
References - The European Commitment to Human-Centered Technology: The Integral Role of HCI in the EU AI Act's Success / 2402.14728 / ISBN:https://doi.org/10.48550/arXiv.2402.14728 / Published by ArXiv / on (web) Publishing site
- 1 The increasing importance of AI
2 The EU AI Act
3 There is no reliable AI regulation without a sound theory of human-AI interaction
4 There is no trustworthy AI without HCI - Multi-stakeholder Perspective on Responsible Artificial Intelligence and Acceptability in Education / 2402.15027 / ISBN:https://doi.org/10.48550/arXiv.2402.15027 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Background
3 Materials and Methods
5 Results
6 Discussion
References
Appendix 1 Scenarios
Appendix 2 Modified psychometric scales - Autonomous Vehicles: Evolution of Artificial Intelligence and Learning Algorithms / 2402.17690 / ISBN:https://doi.org/10.48550/arXiv.2402.17690 / Published by ArXiv / on (web) Publishing site
- IV. AI’s Role in the Emerging Trend of Internet of Things (IoT) Ecosystem for Autonomous Vehicles
- Envisioning the Applications and Implications of Generative AI for News Media / 2402.18835 / ISBN:https://doi.org/10.48550/arXiv.2402.18835 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 The Suitability of Generative AI for Newsroom Tasks
References - FATE in MMLA: A Student-Centred Exploration of Fairness, Accountability, Transparency, and Ethics in Multimodal Learning Analytics / 2402.19071 / ISBN:https://doi.org/10.48550/arXiv.2402.19071 / Published by ArXiv / on (web) Publishing site
- Abstract
1. Introduction
2. Background
3. Methods
4. Results
5. Discussion
References - Guidelines for Integrating Value Sensitive Design in Responsible AI Toolkits / 2403.00145 / ISBN:https://doi.org/10.48550/arXiv.2403.00145 / Published by ArXiv / on (web) Publishing site
- 2 Background and Related Work
3 Methodology
5 Discussion
B Toolkits Considered for Inclusion - The Minimum Information about CLinical Artificial Intelligence Checklist for Generative Modeling Research (MI-CLAIM-GEN) / 2403.02558 / ISBN:https://doi.org/10.48550/arXiv.2403.02558 / Published by ArXiv / on (web) Publishing site
- Part 1. Study design
Part 3. Updates to baseline selection
Part 4. Model evaluation
Part 5. Interpretability of generative models
Table 1. Updated MI-CLAIM checklist for generative AI clinical studies.
References - Towards an AI-Enhanced Cyber Threat Intelligence Processing Pipeline / 2403.03265 / ISBN:https://doi.org/10.48550/arXiv.2403.03265 / Published by ArXiv / on (web) Publishing site
- Abstract
I. Introduction & Motivation
II. Background & Literature Review
III. The AI-Enhanced CTI Processing Pipeline - A Survey on Human-AI Teaming with Large Pre-Trained Models / 2403.04931 / ISBN:https://doi.org/10.48550/arXiv.2403.04931 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 AI Model Improvements with Human-AI Teaming
3 Effective Human-AI Joint Systems
4 Safe, Secure and Trustworthy AI
5 Applications
References - Generative AI in Higher Education: Seeing ChatGPT Through Universities' Policies, Resources, and Guidelines / 2312.05235 / ISBN:https://doi.org/10.48550/arXiv.2312.05235 / Published by ArXiv / on (web) Publishing site
- References
- Kantian Deontology Meets AI Alignment: Towards Morally Grounded Fairness Metrics / 2311.05227 / ISBN:https://doi.org/10.48550/arXiv.2311.05227 / Published by ArXiv / on (web) Publishing site
- References
- How Trustworthy are Open-Source LLMs? An Assessment under Malicious Demonstrations Shows their Vulnerabilities / 2311.09447 / ISBN:https://doi.org/10.48550/arXiv.2311.09447 / Published by ArXiv / on (web) Publishing site
- B Baseline Setup
D More Results - Exploring the Frontiers of LLMs in Psychological Applications: A Comprehensive Review / 2401.01519 / ISBN:https://doi.org/10.48550/arXiv.2401.01519 / Published by ArXiv / on (web) Publishing site
- References
- AGI Artificial General Intelligence for Education / 2304.12479 / ISBN:https://doi.org/10.48550/arXiv.2304.12479 / Published by ArXiv / on (web) Publishing site
- 1. Introduction
2. What is AGI
3. The Potentials of AGI in Transforming Future Education
4. Ethical Issues and Concerns
5. Discussion
6. Conclusion
References - Moral Sparks in Social Media Narratives / 2310.19268 / ISBN:https://doi.org/10.48550/arXiv.2310.19268 / Published by ArXiv / on (web) Publishing site
- 1. Introduction
2. Related Work
3. Data
4. Methods
5. Results
6. Discussion and Conclusion
References - Responsible Artificial Intelligence: A Structured Literature Review / 2403.06910 / ISBN:https://doi.org/10.48550/arXiv.2403.06910 / Published by ArXiv / on (web) Publishing site
- 1. Introduction
2. Research Methodology
3. Analysis
6. Conclusion
References - Legally Binding but Unfair? Towards Assessing Fairness of Privacy Policies / 2403.08115 / ISBN:https://doi.org/10.48550/arXiv.2403.08115 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Related Work
4 Informational Fairness
5 Representational Fairness
6 Ethics and Morality
8 Conclusion
References - Towards a Privacy and Security-Aware Framework for Ethical AI: Guiding the Development and Assessment of AI Systems / 2403.08624 / ISBN:https://doi.org/10.48550/arXiv.2403.08624 / Published by ArXiv / on (web) Publishing site
- 2 Theoretical Background
3 Research Methodology
4 Results of the Systematic Literature Review
6 Discussion and Limitations
References - Review of Generative AI Methods in Cybersecurity / 2403.08701 / ISBN:https://doi.org/10.48550/arXiv.2403.08701 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
3 Cyber Offense
4 Cyber Defence
5 Implications of Generative AI in Social, Legal, and Ethical Domains
References - Evaluation Ethics of LLMs in Legal Domain / 2403.11152 / ISBN:https://doi.org/10.48550/arXiv.2403.11152 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
3 Method - Trust in AI: Progress, Challenges, and Future Directions / 2403.14680 / ISBN:https://doi.org/10.48550/arXiv.2403.14680 / Published by ArXiv / on (web) Publishing site
- 1. Introduction
3. Findings
4. Discussion
5. Concluding Remarks and Future Directions
Reference - AI Ethics: A Bibliometric Analysis, Critical Issues, and Key Gaps / 2403.14681 / ISBN:https://doi.org/10.48550/arXiv.2403.14681 / Published by ArXiv / on (web) Publishing site
- Results
AI Ethics Development Phases Based on Keyword Analysis
Key AI Ethics Issues
Key Gaps
References - Safeguarding Marketing Research: The Generation, Identification, and Mitigation of AI-Fabricated Disinformation / 2403.14706 / ISBN:https://doi.org/10.48550/arXiv.2403.14706 / Published by ArXiv / on (web) Publishing site
- Introduction
Methodology
Results
Conclusion
Web Appendix A: Analysis of the Disinformation Manipulations - The Journey to Trustworthy AI- Part 1 Pursuit of Pragmatic Frameworks / 2403.15457 / ISBN:https://doi.org/10.48550/arXiv.2403.15457 / Published by ArXiv / on (web) Publishing site
- Abstract
2 Trustworthy AI: Too Many Definitions or Lack Thereof?
3 Complexities and Challenges
4 AI Regulation: Current Global Landscape
5 Risk
6 Bias and Fairness
7 Explainable AI as an Enabler of Trustworthy AI
8 Implementation Framework
9 A Few Suggestions for a Viable Path Forward
References - Analyzing Potential Solutions Involving Regulation to Escape Some of AI's Ethical Concerns / 2403.15507 / ISBN:https://doi.org/10.48550/arXiv.2403.15507 / Published by ArXiv / on (web) Publishing site
- Introduction
References - The Pursuit of Fairness in Artificial Intelligence Models A Survey / 2403.17333 / ISBN:https://doi.org/10.48550/arXiv.2403.17333 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Related Survey
3 Conceptualizing Fairness and Bias in ML
4 Practical cases of unfairness in real-world setting
6 How Users can be affected by unfair ML Systems
7 Challenges and Limitations
8 Conclusion
References - Domain-Specific Evaluation Strategies for AI in Journalism / 2403.17911 / ISBN:https://doi.org/10.48550/arXiv.2403.17911 / Published by ArXiv / on (web) Publishing site
- 2 Existing AI Evaluation Approaches
3 Blueprints for AI Evaluation in Journalism
References - Power and Play Investigating License to Critique in Teams AI Ethics Discussions / 2403.19049 / ISBN:https://doi.org/10.48550/arXiv.2403.19049 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction and Related Work
2 Methods
3 RQ1: What Factors Influence Members’ “License to Critique” when Discussing AI Ethics with their Team?
4 RQ2: How Do AI Ethics Discussions Unfold while Playing a Game Oriented toward Speculative Critique?
5 Discussion
6 Conclusion
References - Implications of the AI Act for Non-Discrimination Law and Algorithmic Fairness / 2403.20089 / ISBN:https://doi.org/10.48550/arXiv.2403.20089 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Non-discrimination law vs. algorithmic fairness
3 Implications of the AI Act
4 Practical challenges for compliance - AI Act and Large Language Models (LLMs): When critical issues and privacy impact require human and ethical oversight / 2404.00600 / ISBN:https://doi.org/10.48550/arXiv.2404.00600 / Published by ArXiv / on (web) Publishing site
- 1. Introduction
3. The definition of artificial intelligence systems
5. Human Oversight
6. Large Language Models (LLMs) - Introduction
7. Artificial intelligence Liability - Exploring the Nexus of Large Language Models and Legal Systems: A Short Survey / 2404.00990 / ISBN:https://doi.org/10.48550/arXiv.2404.00990 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Applications of Large Language Models in Legal Tasks
3 Fine-Tuned Large Language Models in Various Countries and Regions
4 Legal Problems of Large Language Models
5 Data Resources for Large Language Models in Law
References - A Review of Multi-Modal Large Language and Vision Models / 2404.01322 / ISBN:https://doi.org/10.48550/arXiv.2404.01322 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 What is a Language Model?
4 Specific Large Language Models
5 Vision Models and Multi-Modal Large Language Models
6 Model Tuning
7 Model Evaluation and Benchmarking
References - Balancing Progress and Responsibility: A Synthesis of Sustainability Trade-Offs of AI-Based Systems / 2404.03995 / ISBN:https://doi.org/10.48550/arXiv.2404.03995 / Published by ArXiv / on (web) Publishing site
- Abstract
III. Study Design
IV. Results
V. Discussion
VI. Threats to Validity - Designing for Human-Agent Alignment: Understanding what humans want from their agents / 2404.04289 / ISBN:https://doi.org/10.1145/3613905.3650948 / Published by ArXiv / on (web) Publishing site
- 2 Background
3 Method
4 Findings
5 Discussion
References - Is Your AI Truly Yours? Leveraging Blockchain for Copyrights, Provenance, and Lineage / 2404.06077 / ISBN:https://doi.org/10.48550/arXiv.2404.06077 / Published by ArXiv / on (web) Publishing site
- V. Implementation on DAML
VI. Evaluation
References - Frontier AI Ethics: Anticipating and Evaluating the Societal Impacts of Generative Agents / 2404.06750 / ISBN:https://arxiv.org/abs/2404.06750 / Published by ArXiv / on (web) Publishing site
- Introduction
A Primer
Rebooting Machine Ethics
Generative Agents in Society
References - Safeguarding Marketing Research: The Generation, Identification, and Mitigation of AI-Fabricated Disinformation / 2403.14706 / ISBN:https://doi.org/10.48550/arXiv.2403.14706 / Published by ArXiv / on (web) Publishing site
- Bibliography
- The Pursuit of Fairness in Artificial Intelligence Models A Survey / 2403.17333 / ISBN:https://doi.org/10.48550/arXiv.2403.17333 / Published by ArXiv / on (web) Publishing site
- A Appendices
- A Critical Survey on Fairness Benefits of Explainable AI / 2310.13007 / ISBN:https://doi.org/10.1145/3630106.3658990 / Published by ArXiv / on (web) Publishing site
- 2 Background
3 Methodology
4 Critical Survey
6 Conclusion and Outlook
References
A Methodologies of Surveyed Literature - AI Alignment: A Comprehensive Survey / 2310.19852 / ISBN:https://doi.org/10.48550/arXiv.2310.19852 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Learning from Feedback
3 Learning under Distribution Shift
4 Assurance
5 Governance
6 Conclusion
References - Regulating AI-Based Remote Biometric Identification. Investigating the Public Demand for Bans, Audits, and Public Database Registrations / 2401.13605 / ISBN:https://doi.org/10.48550/arXiv.2401.13605 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
4 Public Opinion on AI Governance
5 Research Questions
6 Results
7 Discussion
8 Conclusion
References - Generative Ghosts: Anticipating Benefits and Risks of AI Afterlives / 2402.01662 / ISBN:https://doi.org/10.48550/arXiv.2402.01662 / Published by ArXiv / on (web) Publishing site
- 2 Related Work
3 Generative Ghosts: A Design Space
4 Benefits and Risks of Generative Ghosts
5 Discussion
6 Conclusion
References - Epistemic Power in AI Ethics Labor: Legitimizing Located Complaints / 2402.08171 / ISBN:https://doi.org/10.1145/3630106.3658973 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
3 Automated Model Cards: Legitimacy via Quantified Objectivity
4 Grey Skin as Technofix: Failures to Lodge Located Complaints
5 Alternative AI Ethics: Space for Embodied Complaints
6 Conclusions: Towards Humble Technical Practices
References - On the role of ethics and sustainability in business innovation / 2404.07678 / ISBN:https://doi.org/10.48550/arXiv.2404.07678 / Published by ArXiv / on (web) Publishing site
- Background
Ethical considerations
Sustainability considerations
Recommendations - PoliTune: Analyzing the Impact of Data Selection and Fine-Tuning on Economic and Political Biases in Large Language Models / 2404.08699 / ISBN:https://doi.org/10.48550/arXiv.2404.08699 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Background and Related Work
3 Methodology
References - Detecting AI Generated Text Based on NLP and Machine Learning Approaches / 2404.10032 / ISBN:https://doi.org/10.48550/arXiv.2404.10032 / Published by ArXiv / on (web) Publishing site
- Abstract
I. Introduction
II. Literature Review
III. Proposed Methodology
IV. Results and Discussion
V. Conclusion
References - Debunking Robot Rights Metaphysically, Ethically, and Legally / 2404.10072 / ISBN:https://doi.org/10.48550/arXiv.2404.10072 / Published by ArXiv / on (web) Publishing site
- 6 Posthumanism
8 The Troubling Implications of Legal Rationales for Robot Rights
9 The Enduring Irresponsibility of AI Rights Talk
References - Characterizing and modeling harms from interactions with design patterns in AI interfaces / 2404.11370 / ISBN:https://doi.org/10.48550/arXiv.2404.11370 / Published by ArXiv / on (web) Publishing site
- 3 Scoping Review of Design Patterns, Affordances, and Harms in AI Interfaces
4 DECAI: Design-Enhanced Control of AI Systems
5 Case Studies
6 Discussion
References - Taxonomy to Regulation: A (Geo)Political Taxonomy for AI Risks and Regulatory Measures in the EU AI Act / 2404.11476 / ISBN:https://doi.org/10.48550/arXiv.2404.11476 / Published by ArXiv / on (web) Publishing site
- 2 EU Public Policy Analysis
3 A Geo-Political AI Risk Taxonomy
4 European Union Artificial Intelligence Act
References - Just Like Me: The Role of Opinions and Personal Experiences in The Perception of Explanations in Subjective Decision-Making / 2404.12558 / ISBN:https://doi.org/10.48550/arXiv.2404.12558 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Method
3 Results
4 Discussion and Implications
5 Limitations
References - Large Language Model Supply Chain: A Research Agenda / 2404.12736 / ISBN:https://doi.org/10.48550/arXiv.2404.12736 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
3 LLM Infrastructure
4 LLM Lifecycle
5 Downstream Ecosystem
References - The Necessity of AI Audit Standards Boards / 2404.13060 / ISBN:https://doi.org/10.48550/arXiv.2404.13060 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
3 Governance for safety
4 Auditing standards body, not standard audits - Modeling Emotions and Ethics with Large Language Models / 2404.13071 / ISBN:https://doi.org/10.48550/arXiv.2404.13071 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Qualifying and Quantifying Emotions
3 Case Study #1: Linguistic Features of Emotion
4 Qualifying and Quantifying Ethics
References - From Model Performance to Claim: How a Change of Focus in Machine Learning Replicability Can Help Bridge the Responsibility Gap / 2404.13131 / ISBN:https://doi.org/10.1145/3630106.3658951 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Disentangling Replicability of Model Performance Claim and Replicability of Social Claim
3 How Claim Replicability Helps Bridge the Responsibility Gap
References - A Practical Multilevel Governance Framework for Autonomous and Intelligent Systems / 2404.13719 / ISBN:https://doi.org/10.48550/arXiv.2404.13719 / Published by ArXiv / on (web) Publishing site
- I. Introduction
- Beyond Personhood: Agency, Accountability, and the Limits of Anthropomorphic Ethical Analysis / 2404.13861 / ISBN:https://doi.org/10.48550/arXiv.2404.13861 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Mechanistic Agency: A Common View in AI Practice
3 Volitional Agency: an Alternative Approach
4 Alternatives to AI as Agent
References - Designing Safe and Engaging AI Experiences for Children: Towards the Definition of Best Practices in UI/UX Design / 2404.14218 / ISBN:https://doi.org/10.48550/arXiv.2404.14218 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
- AI Procurement Checklists: Revisiting Implementation in the Age of AI Governance / 2404.14660 / ISBN:https://doi.org/10.48550/arXiv.2404.14660 / Published by ArXiv / on (web) Publishing site
- 1 Technical assessments require an AI expert to complete — and we don’t have enough experts
- Fairness in AI: challenges in bridging the gap between algorithms and law / 2404.19371 / ISBN:https://doi.org/10.48550/arXiv.2404.19371 / Published by ArXiv / on (web) Publishing site
- I. Introduction
II. Discrimination in Law
V. Discussion - War Elephants: Rethinking Combat AI and Human Oversight / 2404.19573 / ISBN:https://doi.org/10.48550/arXiv.2404.19573 / Published by ArXiv / on (web) Publishing site
- 2 Background
3 Lessons from History: War Elephants
4 Discussion
5 Conclusions
References - Not a Swiss Army Knife: Academics' Perceptions of Trade-Offs Around Generative Artificial Intelligence Use / 2405.00995 / ISBN:https://doi.org/10.48550/arXiv.2405.00995 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Related Work
3 Method
4 Findings
5 Discussion
7 Conclusion
References - Towards an Ethical and Inclusive Implementation of Artificial Intelligence in Organizations: A Multidimensional Framework / 2405.01697 / ISBN:https://doi.org/10.48550/arXiv.2405.01697 / Published by ArXiv / on (web) Publishing site
- 2 How can organizations participate
- A Survey on Large Language Models for Critical Societal Domains: Finance, Healthcare, and Law / 2405.01769 / ISBN:https://doi.org/10.48550/arXiv.2405.01769 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Related Surveys
3 Finance
4 Medicine and Healthcare
5 Law
6 Ethics
References - AI-Powered Autonomous Weapons Risk Geopolitical Instability and Threaten AI Research / 2405.01859 / ISBN:https://doi.org/10.48550/arXiv.2405.01859 / Published by ArXiv / on (web) Publishing site
- 2. Current State of AWS
4. Policy Recommendations - Responsible AI: Portraits with Intelligent Bibliometrics / 2405.02846 / ISBN:https://doi.org/10.48550/arXiv.2405.02846 / Published by ArXiv / on (web) Publishing site
- IV. Bibliometric Portraits of Responsible AI
V. Discussion and Conclusions
References
Authors - Exploring the Potential of the Large Language Models (LLMs) in Identifying Misleading News Headlines / 2405.03153 / ISBN:https://doi.org/10.48550/arXiv.2405.03153 / Published by ArXiv / on (web) Publishing site
- 4 Results
5 Discussion
References - Organizing a Society of Language Models: Structures and Mechanisms for Enhanced Collective Intelligence / 2405.03825 / ISBN:https://doi.org/10.48550/arXiv.2405.03825 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
3 Proposed Organizational Forms
4 Interaction Mechanisms
5 Governance and Organization
References - A Fourth Wave of Open Data? Exploring the Spectrum of Scenarios for Open Data and Generative AI / 2405.04333 / ISBN:https://doi.org/10.48550/arXiv.2405.04333 / Published by ArXiv / on (web) Publishing site
- Glossary of Terms
Executive Summary
1. Introduction
3. A Spectrum of Scenarios of Open Data for Generative AI
4. Open Data Requirements And Diagnostic
5. Recommendations for Advancing Open Data in Generative AI
Appendix - Guiding the Way: A Comprehensive Examination of AI Guidelines in Global Media / 2405.04706 / ISBN:https://doi.org/10.48550/arXiv.2405.04706 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Theoretical Framework
3 Data and Methods
4 Results
5 Discussion and conclusions
References - Trustworthy AI-Generative Content in Intelligent 6G Network: Adversarial, Privacy, and Fairness / 2405.05930 / ISBN:https://doi.org/10.48550/arXiv.2405.05930 / Published by ArXiv / on (web) Publishing site
- I. Introduction
II. Trustworthy AIGC in 6G Network
V. Fairness of AIGC in 6G Network
VI. Case Study
References - RAI Guidelines: Method for Generating Responsible AI Guidelines Grounded in Regulations and Usable by (Non-)Technical Roles / 2307.15158 / ISBN:https://doi.org/10.48550/arXiv.2307.15158 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Related Work
4 Method for Generating Responsible AI Guidelines
5 Evaluation of the 22 Responsible AI Guidelines
6 Discussion
References
B Mapping Guidelines with EU AI Act Articles - Redefining Qualitative Analysis in the AI Era: Utilizing ChatGPT for Efficient Thematic Analysis / 2309.10771 / ISBN:https://doi.org/10.48550/arXiv.2309.10771 / on (web) Publishing site
- 1 Introduction
2 Related Work
4 Users’ Experiences and Challenges with ChatGPT
5 Analyses of the Design Process
7 Discussion
8 Limitations and Future Work
9 Conclusion
References
A Appendix - XXAI: Towards eXplicitly eXplainable Artificial Intelligence / 2401.03093 / ISBN:https://doi.org/10.48550/arXiv.2401.03093 / Published by ArXiv / on (web) Publishing site
- 1. Introduction
References - Should agentic conversational AI change how we think about ethics? Characterising an interactional ethics centred on respect / 2401.09082 / ISBN:https://doi.org/10.48550/arXiv.2401.09082 / Published by ArXiv / on (web) Publishing site
- Introduction
Evaluating a system as a social actor
Social-interactional harms
Informing existing HCI approaches - Unsocial Intelligence: an Investigation of the Assumptions of AGI Discourse / 2401.13142 / ISBN:https://doi.org/10.48550/arXiv.2401.13142 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Between Human Intelligence and Technology: AGI’s Dual Value-Laden Pedigrees
3 The Motley Choices of AGI Discourse
4 Towards Contextualized, Politically Legitimate, and Social Intelligence
5 Conclusion: Politically Legitimate Intelligence - Not My Voice! A Taxonomy of Ethical and Safety Harms of Speech Generators / 2402.01708 / ISBN:https://doi.org/10.48550/arXiv.2402.01708 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Related Work
3 Overview of Speech Generation
References
A Appendix - The Wolf Within: Covert Injection of Malice into MLLM Societies via an MLLM Operative / 2402.14859 / ISBN:https://doi.org/10.48550/arXiv.2402.14859 / Published by ArXiv / on (web) Publishing site
- 1. Introduction
3. Methodology
4. Experiments
References - Social Choice Should Guide AI Alignment in Dealing with Diverse Human Feedback / 2404.10271 / ISBN:https://doi.org/10.48550/arXiv.2404.10271 / Published by ArXiv / on (web) Publishing site
- Abstract
1. Introduction
2. Background
3. What Are the Collective Decision Problems and their Alternatives in this Context?
5. What Is the Format of Human Feedback?
7. Which Traditional Social-Choice-Theoretic Concepts Are Most Relevant?
8. How Should We Account for Behavioral Aspects and Human Cognitive Structures?
Impact Statement
References - A scoping review of using Large Language Models (LLMs) to investigate Electronic Health Records (EHRs) / 2405.03066 / ISBN:https://doi.org/10.48550/arXiv.2405.03066 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Materials
3 Results
4 Discussion
5 Conclusions
Appendix
References - Integrating Emotional and Linguistic Models for Ethical Compliance in Large Language Models / 2405.07076 / ISBN:https://doi.org/10.48550/arXiv.2405.07076 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Related Work
3 Quantitative Models of Emotions, Behaviors, and Ethics
4 Pilot Studies
Limitations
Appendix S: Multiple Adversarial LLMs
Appendix B: Polarized Emotions in One Article
Appendix D: Complex Emotions
Appendix E: “To My Sister” of Different Linguistic Behaviors - Using ChatGPT for Thematic Analysis / 2405.08828 / ISBN:https://doi.org/10.48550/arXiv.2405.08828 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Coding in Thematic Analysis: Manual vs GPT-driven Approaches
3 Pilot-testing: UN Policy Documents Thematic Analysis Supported by GPT
4 Validation Using Topic Modeling
5 Discussion and Limitations
6 OpenAI Updates on Policies and Model Capabilities: Implications for Thematic Analysis
7 Conclusion
9 Appendix
References - When AI Eats Itself: On the Caveats of Data Pollution in the Era of Generative AI / 2405.09597 / ISBN:https://doi.org/10.48550/arXiv.2405.09597 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 RQ1: What Happens When AI Eats Itself ?
3 RQ2: What Technical Strategies Can Be Employed to Mitigate the Negative Consequences of AI Autophagy?
4 RQ3: Which Regulatory Strategies Can Be Employed to Address These Negative Consequences?
5 Conclusions and Outlook
7 References - Cyber Risks of Machine Translation Critical Errors : Arabic Mental Health Tweets as a Case Study / 2405.11668 / ISBN:https://doi.org/10.48550/arXiv.2405.11668 / Published by ArXiv / on (web) Publishing site
- 1. Introduction
2. MT Critical Errors
3. Data Compiling and Annotation
4. Error Analysis
5. Quality Metrics Performance
7. Bibliographical References - The Narrow Depth and Breadth of Corporate Responsible AI Research / 2405.12193 / ISBN:https://doi.org/10.48550/arXiv.2405.12193 / Published by ArXiv / on (web) Publishing site
- 2 Related Literature on Industry’s Engagement in Responsible AI Research
4 The Narrow Depth of Industry’s Responsible AI Research
5 The Narrow Breadth of Industry’s Responsible AI Research
7 Discussion
References
S1 Additional Analyses on Engagement Analysis
S2 Additional Analyses on Linguistic Analysis - Pragmatic auditing: a pilot-driven approach for auditing Machine Learning systems / 2405.13191 / ISBN:https://doi.org/10.48550/arXiv.2405.13191 / Published by ArXiv / on (web) Publishing site
- 2 Related Work
3 The Audit Procedure
4 Conducting the Pilots
A Standard Terminology
D Lifecycle Mapping of Pilot 1
E Lifecycle Mapping of Pilot 2: The GARMI Vision Module - A Comprehensive Overview of Large Language Models (LLMs) for Cyber Defences: Opportunities and Directions / 2405.14487 / ISBN:https://doi.org/10.48550/arXiv.2405.14487 / Published by ArXiv / on (web) Publishing site
- Abstract
I. Introduction
III. Vulnerability Assessment
IV. Network Security
V. Privacy Preservation
VII. Cyber Security Operations Automation
VIII. Ethical LLMs
IX. Challenges and Open Problems
References - Towards Clinical AI Fairness: Filling Gaps in the Puzzle / 2405.17921 / ISBN:https://doi.org/10.48550/arXiv.2405.17921 / Published by ArXiv / on (web) Publishing site
- Abstract
Main
Results
Methods in clinical AI fairness research
Discussion
Conclusion
Methods
Reference
Additional material - The ethical situation of DALL-E 2 / 2405.19176 / ISBN:https://doi.org/10.48550/arXiv.2405.19176 / Published by ArXiv / on (web) Publishing site
- Abstract
2 Understanding what can DALL-E 2 actually do
4 Following the RRI (Responsible research innovation) principles
References - The Future of Child Development in the AI Era. Cross-Disciplinary Perspectives Between AI and Child Development Experts / 2405.19275 / ISBN:https://doi.org/10.48550/arXiv.2405.19275 / Published by ArXiv / on (web) Publishing site
- 1. Introduction
2. Anticipated AI Use for Children
3. Discussion
Bibliography - Using Large Language Models for Humanitarian Frontline Negotiation: Opportunities and Considerations / 2405.20195 / ISBN:https://doi.org/10.48550/arXiv.2405.20195 / Published by ArXiv / on (web) Publishing site
- Abstract
1. Introduction
3. Method
5. Interview Results: Opportunities and Concerns of Using LLMs in the Frontline
6. Discussion
A. Appendix - There and Back Again: The AI Alignment Paradox / 2405.20806 / ISBN:https://doi.org/10.48550/arXiv.2405.20806 / Published by ArXiv / on (web) Publishing site
- Paper
- Responsible AI for Earth Observation / 2405.20868 / ISBN:https://doi.org/10.48550/arXiv.2405.20868 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Mitigating (Unfair) Bias
3 Secure AI in EO: Focusing on Defense Mechanisms, Uncertainty Modeling and Explainability
4 Geo-Privacy and Privacy-preserving Measures
5 Maintaining Scientific Excellence, Open Data, and Guiding AI Usage Based on Ethical Principles in EO
7 Responsible AI Integration in Business Innovation and Sustainability
8 Conclusions, Remarks and Future Directions - Gender Bias Detection in Court Decisions: A Brazilian Case Study / 2406.00393 / ISBN:https://doi.org/10.48550/arXiv.2406.00393 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Related Work
3 Framework
4 Discussion
5 Final Remarks
References
A DVC Dataset: Domestic Violence Cases
B PAC Dataset: Parental Alienation Cases
C Biases - Transforming Computer Security and Public Trust Through the Exploration of Fine-Tuning Large Language Models / 2406.00628 / ISBN:https://doi.org/10.48550/arXiv.2406.00628 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Background, Foundational Studies, and Discussion:
3 Experimental Design, Overview, and Discussion
4 Comparative Analysis of Pre-Trained Models.
5 Discussion and further research
References - How Ethical Should AI Be? How AI Alignment Shapes the Risk Preferences of LLMs / 2406.01168 / ISBN:https://doi.org/10.48550/arXiv.2406.01168 / Published by ArXiv / on (web) Publishing site
- Introduction
I. Description of Method/Empirical Design
II. Risk Characteristics of LLMs
IV. Impact of Alignments on Corporate Investment Forecasts
V. Robustness: Transcript Readability and Investment Score Predictability
VI. Conclusions
Figures and tables - Evaluating AI fairness in credit scoring with the BRIO tool / 2406.03292 / ISBN:https://doi.org/10.48550/arXiv.2406.03292 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
4 Fairness violation analysis in BRIO
7 Revenue analysis
8 Conclusions - Promoting Fairness and Diversity in Speech Datasets for Mental Health and Neurological Disorders Research / 2406.04116 / ISBN:https://doi.org/10.48550/arXiv.2406.04116 / Published by ArXiv / on (web) Publishing site
- 4. Desiderata
6. Discussion
References
Appendix A. Terminology - MoralBench: Moral Evaluation of LLMs / 2406.04428 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Related Work
4 Experiments
Appendix - Can Prompt Modifiers Control Bias? A Comparative Analysis of Text-to-Image Generative Models / 2406.05602 / Published by ArXiv / on (web) Publishing site
- Abstract
1. Introduction
2. Related Work
3. Bias Evaluation
5. Results
6. Discussion
7. Conclusion
References
Can Prompt Modifiers Control Bias? A Comparative Analysis of Text-to-Image Generative Models - Deception Analysis with Artificial Intelligence: An Interdisciplinary Perspective / 2406.05724 / ISBN:https://doi.org/10.48550/arXiv.2406.05724 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Theories and Components of Deception
3 Reductionism & Previous Research in Deceptive AI
4 DAMAS: A MAS Framework for Deception Analysis - The Impact of AI on Academic Research and Publishing / 2406.06009 / Published by ArXiv / on (web) Publishing site
- Introduction
Ethics of AI for Writing Papers
References - An Empirical Design Justice Approach to Identifying Ethical Considerations in the Intersection of Large Language Models and Social Robotics / 2406.06400 / ISBN:https://doi.org/10.48550/arXiv.2406.06400 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Theoretical Background
3 Methodology
4 Findings
6 Conclusions and Recommendations
References
Appendix A: Interview questions
Appendix B: Collected data summary - The Ethics of Interaction: Mitigating Security Threats in LLMs / 2401.12273 / ISBN:https://doi.org/10.48550/arXiv.2401.12273 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
4 Towards Ethical Mitigation: A Proposed Methodology - Global AI Governance in Healthcare: A Cross-Jurisdictional Regulatory Analysis / 2406.08695 / ISBN:https://doi.org/10.48550/arXiv.2406.08695 / Published by ArXiv / on (web) Publishing site
- 2 Related Work
4 Global Regulatory Landscape of AI
5 Generative AI: The New Frontier
References
A Supplemental Tables - Fair by design: A sociotechnical approach to justifying the fairness of AI-enabled systems across the lifecycle / 2406.09029 / ISBN:https://doi.org/10.48550/arXiv.2406.09029 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Fairness and AI
3 Assuring fairness across the AI lifecycle
4 Assuring AI fairness in healthcare
5 Conclusion - Some things never change: how far generative AI can really change software engineering practice / 2406.09725 / ISBN:https://doi.org/10.48550/arXiv.2406.09725 / Published by ArXiv / on (web) Publishing site
- 2 Background and related work
3 Methodology
4 Results
5 Limitations
6 Conclusions and future work - Federated Learning driven Large Language Models for Swarm Intelligence: A Survey / 2406.09831 / ISBN:https://doi.org/10.48550/arXiv.2406.09831 / Published by ArXiv / on (web) Publishing site
- Abstract
I. Introduction
II. Foundations and Integration of SI and LLM
III. Federated LLMs for Swarm Intelligence
IV. Learned Lessons and Open Challenges
V. Conclusion
References - Applications of Generative AI in Healthcare: algorithmic, ethical, legal and societal considerations / 2406.10632 / ISBN:https://doi.org/10.48550/arXiv.2406.10632 / Published by ArXiv / on (web) Publishing site
- Abstract
I. Introduction
II. Selection of application
III. Analysis
IV. Conclusion
References
Appendix B Legal aspects
Appendix C Algorithmic / technical aspects - Justice in Healthcare Artificial Intelligence in Africa / 2406.10653 / ISBN:https://doi.org/10.48550/arXiv.2406.10653 / Published by ArXiv / on (web) Publishing site
- Abstract
Introduction
1. Beyond Bias and Fairness
3. Ensuring Equitable Access to AI Technologies
6. Ensuring Sustainable AI Development
7. Addressing Bias and Enforcing Fairness - Conversational Agents as Catalysts for Critical Thinking: Challenging Design Fixation in Group Design / 2406.11125 / ISBN:https://doi.org/10.48550/arXiv.2406.11125 / Published by ArXiv / on (web) Publishing site
- Abstract
1 INTRODUCTION
2 BEYOND RECOMMENDATIONS: ENHANCING CRITICAL THINKING WITH GENERATIVE AI
3 CHALLENGES AND OPPORTUNITIES OF USING CONVERSATIONAL AGENTS IN GROUP DESIGN
5 BALANCING CRITICAL THINKING WITH DESIGNER SATISFACTION AND MOTIVATION
6 POTENTIAL DESIGN CONSIDERATIONS
7 CONCLUSION
REFERENCES - Current state of LLM Risks and AI Guardrails / 2406.12934 / ISBN:https://doi.org/10.48550/arXiv.2406.12934 / Published by ArXiv / on (web) Publishing site
- 2 Large Language Model Risks
3 Strategies in Securing Large Language models
4 Challenges in Implementing Guardrails
References - Leveraging Large Language Models for Patient Engagement: The Power of Conversational AI in Digital Health / 2406.13659 / ISBN:https://doi.org/10.48550/arXiv.2406.13659 / Published by ArXiv / on (web) Publishing site
- I. INTRODUCTION
II. RECENT ADVANCEMENTS IN LARGE LANGUAGE MODELS
III. CASE STUDIES: APPLICATIONS OF LLMS IN PATIENT ENGAGEMENT
V. CONCLUSION
REFERENCES - Documenting Ethical Considerations in Open Source AI Models / 2406.18071 / ISBN:https://doi.org/10.48550/arXiv.2406.18071 / Published by ArXiv / on (web) Publishing site
- 3 METHODOLOGY AND STUDY DESIGN
4 RESULTS - AI Alignment through Reinforcement Learning from Human Feedback? Contradictions and Limitations / 2406.18346 / ISBN:https://doi.org/10.48550/arXiv.2406.18346 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Background
3 Limitations of RLxF
4 The Internal Tensions and Ethical Issues in RLxF
5 Rebooting Safety and Alignment: Integrating AI Ethics and System Safety
References - A Survey on Privacy Attacks Against Digital Twin Systems in AI-Robotics / 2406.18812 / ISBN:https://doi.org/10.48550/arXiv.2406.18812 / Published by ArXiv / on (web) Publishing site
- II. BACKGROUND
III. ATTACKS ON DT-INTEGRATED AI ROBOTS - Staying vigilant in the Age of AI: From content generation to content authentication / 2407.00922 / ISBN:https://doi.org/10.48550/arXiv.2407.00922 / Published by ArXiv / on (web) Publishing site
- Abstract
Introduction
Art Practice: Human Reactions to Synthetic Fake Content
Prospective Usage: Assessing Veracity in Everyday Content
Acknowledgements - SecGenAI: Enhancing Security of Cloud-based Generative AI Applications within Australian Critical Technologies of National Interest / 2407.01110 / ISBN:https://doi.org/10.48550/arXiv.2407.01110 / Published by ArXiv / on (web) Publishing site
- I. INTRODUCTION
II. UNDERSTANDING GENAI SECURITY
III. CRITICAL ANALYSIS
IV. SECGENAI FRAMEWORK REQUIREMENTS SPECIFICATIONS
REFERENCES - Artificial intelligence, rationalization, and the limits of control in the public sector: the case of tax policy optimization / 2407.05336 / ISBN:https://doi.org/10.48550/arXiv.2407.05336 / Published by ArXiv / on (web) Publishing site
- 2. Artificial intelligence as Weberian rationalization
4. AI-driven tax policy to reduce economic inequality: a thought experiment
5. Freedom, equality, and self-determination in the iron cage - A Blueprint for Auditing Generative AI / 2407.05338 / ISBN:https://doi.org/10.48550/arXiv.2407.05338 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Why audit generative AI systems?
3 How to audit generative AI systems?
5 Model audits
6 Application audits
7 Clarifications and limitations
8 Conclusion - Challenges and Best Practices in Corporate AI Governance:Lessons from the Biopharmaceutical Industry / 2407.05339 / ISBN:https://doi.org/10.48550/arXiv.2407.05339 / Published by ArXiv / on (web) Publishing site
- 1 Introduction | The need for corporate AI governance
3 Practical implementation challenges | What to be prepared for? - Operationalising AI governance through ethics-based auditing: An industry case study / 2407.06232 / Published by ArXiv / on (web) Publishing site
- Abstract
1. Introduction
3. AstraZeneca and AI governance
4. An ‘ethics-based’ AI audit
5. Methodology: An industry case study
7. Limitations
APPENDIX 1 - Auditing of AI: Legal, Ethical and Technical Approaches / 2407.06235 / Published by ArXiv / on (web) Publishing site
- 2 The evolution of auditing as a governance mechanism
3 The need to audit AI systems – a confluence of top-down and bottom-up pressures
5 In this topical collection
6 Concluding remarks
References - Why should we ever automate moral decision making? / 2407.07671 / ISBN:https://doi.org/10.48550/arXiv.2407.07671 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Reasons for automated moral decision making - Evolving AI Collectives to Enhance Human Diversity and Enable Self-Regulation / 2402.12590 / ISBN:https://doi.org/10.48550/arXiv.2402.12590 / Published by ArXiv / on (web) Publishing site
- D. Results for Claude 3
- Potential Societal Biases of ChatGPT in Higher Education: A Scoping Review / 2311.14381 / ISBN:https://doi.org/10.48550/arXiv.2311.14381 / Published by ArXiv / on (web) Publishing site
- REFERENCES
- FUTURE-AI: International consensus guideline for trustworthy and deployable artificial intelligence in healthcare / 2309.12325 / ISBN:https://doi.org/10.48550/arXiv.2309.12325 / Published by ArXiv / on (web) Publishing site
- REFERENCES:
- Rethinking Machine Ethics -- Can LLMs Perform Moral Reasoning through the Lens of Moral Theories? / 2308.15399 / ISBN:https://doi.org/10.48550/arXiv.2308.15399 / Published by ArXiv / on (web) Publishing site
- A Details of Datasets
B Details of Instructions - PoliTune: Analyzing the Impact of Data Selection and Fine-Tuning on Economic and Political Biases in Large Language Models / 2404.08699 / ISBN:https://doi.org/10.48550/arXiv.2404.08699 / Published by ArXiv / on (web) Publishing site
- E GPT Scoring Prompt
- Bridging the Global Divide in AI Regulation: A Proposal for a Contextual, Coherent, and Commensurable Framework / 2303.11196 / ISBN:https://doi.org/10.48550/arXiv.2303.11196 / Published by ArXiv / on (web) Publishing site
- Abstract
I. Introduction
II. Global Divide in AI Regulation: Horizontally. Context-Specific
III. Striking a Balance Between the Two Approaches
IV. Proposing an Alternative 3C Framework
V. Conclusion - CogErgLLM: Exploring Large Language Model Systems Design Perspective Using Cognitive Ergonomics / 2407.02885 / ISBN:https://doi.org/10.48550/arXiv.2407.02885 / Published by ArXiv / on (web) Publishing site
- 4 Design Framework
5 Case Studies
6 Discussion
References - Past, Present, and Future: A Survey of The Evolution of Affective Robotics For Well-being / 2407.02957 / ISBN:https://doi.org/10.48550/arXiv.2407.02957 / Published by ArXiv / on (web) Publishing site
- I. Introduction
II. Background and Definitions
III. Method
IV. Evolution of Affective Robots for Well-Being
V. 10 Years of Affective Robotics
VI. Future Opportunities in Affective Robotics for Well-Being
References - With Great Power Comes Great Responsibility: The Role of Software Engineers / 2407.08823 / ISBN:https://doi.org/10.48550/arXiv.2407.08823 / Published by ArXiv / on (web) Publishing site
- 2 Background and Related Work
3 Future Research Challenges - Have We Reached AGI? Comparing ChatGPT, Claude, and Gemini to Human Literacy and Education Benchmarks / 2407.09573 / ISBN:https://doi.org/10.48550/arXiv.2407.09573 / Published by ArXiv / on (web) Publishing site
- 2 Literature Review
4 Data Analysis and Results
5 Discussion
References - Generative AI for Health Technology Assessment: Opportunities, Challenges, and Policy Considerations / 2407.11054 / ISBN:https://doi.org/10.48550/arXiv.2407.11054 / Published by ArXiv / on (web) Publishing site
- Introduction
A brief history of AI and generative AI
Applications of generative AI in literature reviews and evidence synthesis
Applications of generative AI to real-world evidence (RWE):
Applications of generative AI to health economic modeling
Limitations of generative AI in HTA applications
Glossary
Appendices
References - Thorns and Algorithms: Navigating Generative AI Challenges Inspired by Giraffes and Acacias / 2407.11360 / ISBN:https://doi.org/10.48550/arXiv.2407.11360 / Published by ArXiv / on (web) Publishing site
- 2 Study Methodology: Narrative Review
4 Generative AI and Humans: Risks and Mitigation - Prioritizing High-Consequence Biological Capabilities in Evaluations of Artificial Intelligence Models / 2407.13059 / ISBN:https://doi.org/10.48550/arXiv.2407.13059 / Published by ArXiv / on (web) Publishing site
- Introduction
References - Report on the Conference on Ethical and Responsible Design in the National AI Institutes: A Summary of Challenges / 2407.13926 / ISBN:https://doi.org/10.48550/arXiv.2407.13926 / Published by ArXiv / on (web) Publishing site
- 3. AI Institutes and Society
- Assurance of AI Systems From a Dependability Perspective / 2407.13948 / ISBN:https://doi.org/10.48550/arXiv.2407.13948 / Published by ArXiv / on (web) Publishing site
- 1 Introduction: Assurance for Traditional Systems
3 Assurance of AI Systems for Specific Functions
4 Assurance for General-Purpose AI
5 Assurance and Alignment for AGI
6 Summary and Conclusion
References - Open Artificial Knowledge / 2407.14371 / ISBN:https://doi.org/10.48550/arXiv.2407.14371 / Published by ArXiv / on (web) Publishing site
- Abstract
2. Key Challenges of Artificial Data
3. OAK Dataset
4. Automatic Prompt Generation
References
Appendices - Honest Computing: Achieving demonstrable data lineage and provenance for driving data and process-sensitive policies / 2407.14390 / ISBN:https://doi.org/10.48550/arXiv.2407.14390 / Published by ArXiv / on (web) Publishing site
- 1. Introduction
2. Threat Model for Honest Computing
4. Discussion
References - RogueGPT: dis-ethical tuning transforms ChatGPT4 into a Rogue AI in 158 Words / 2407.15009 / ISBN:https://doi.org/10.48550/arXiv.2407.15009 / Published by ArXiv / on (web) Publishing site
- I. Introduction
II. Background
III. Methodology
V. Benchmarking with ChatGPT4 Default Interface
References - Nudging Using Autonomous Agents: Risks and Ethical Considerations / 2407.16362 / ISBN:https://doi.org/10.48550/arXiv.2407.16362 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Technology Mediated Nudging
3 Examples of Biases
5 Principles for the Nudge Lifecycle - Mapping the individual, social, and biospheric impacts of Foundation Models / 2407.17129 / ISBN:https://doi.org/10.48550/arXiv.2407.17129 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Theoretical Lens: Expanding Views on Algorithmic Risks and Harms
3 Methods: Snowball and Structured Search
4 Mapping Individual, Social, and Biospheric Impacts of Foundation Models
5 Discussion: Grappling with the Scale and Interconnectedness of Foundation Models
References
A Appendix - Navigating the United States Legislative Landscape on Voice Privacy: Existing Laws, Proposed Bills, Protection for Children, and Synthetic Data for AI / 2407.19677 / ISBN:https://doi.org/10.48550/arXiv.2407.19677 / Published by ArXiv / on (web) Publishing site
- 3. Children’s Privacy in the US
4. State-level Privacy Regulations in the US
5. Regulations on Synthetic Data for AI
8. References - Exploring the Role of Social Support when Integrating Generative AI into Small Business Workflows / 2407.21404 / ISBN:https://doi.org/10.48550/arXiv.2407.21404 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Related Work
3 Methodology
4 Findings
6 Discussion and Future Work
References
A Example Storyboards - Deepfake Media Forensics: State of the Art and Challenges Ahead / 2408.00388 / ISBN:https://doi.org/10.48550/arXiv.2408.00388 / Published by ArXiv / on (web) Publishing site
- 1. Introduction
2. Deepfake Detection
3. Deepfake Attribution and Recognition
5. Deepfakes Detection Method on Realistic Scenarios
6. Active Authentication
References - Integrating ESG and AI: A Comprehensive Responsible AI Assessment Framework / 2408.00965 / ISBN:https://doi.org/10.48550/arXiv.2408.00965 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Background and Literature Review
3 Methodology
4 ESG-AI framework
6 Conclusion
References - AI for All: Identifying AI incidents Related to Diversity and Inclusion / 2408.01438 / ISBN:https://doi.org/10.48550/arXiv.2408.01438 / Published by ArXiv / on (web) Publishing site
- 4 Results
6 Threats to Validity - Surveys Considered Harmful? Reflecting on the Use of Surveys in AI Research, Development, and Governance / 2408.01458 / ISBN:https://doi.org/10.48550/arXiv.2408.01458 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Related Work
4 Large-Scale Surveys of AI in the Literature
5 Discussion
References
A Known Limitations of Surveys
B Additional Materials for Pilot Survey
C Additional Materials for the Systematic Literature Review - Improving Large Language Model (LLM) fidelity through context-aware grounding: A systematic approach to reliability and veracity / 2408.04023 / ISBN:https://doi.org/10.48550/arXiv.2408.04023 / Published by ArXiv / on (web) Publishing site
- Abstract
1. Introduction
2. Related Work
3. Proposed framework
4. Model architecture and training parameters
5. Model Training
6. Results
7. Conclusion and Future Directions
References - AI-Driven Chatbot for Intrusion Detection in Edge Networks: Enhancing Cybersecurity with Ethical User Consent / 2408.04281 / ISBN:https://doi.org/10.48550/arXiv.2408.04281 / Published by ArXiv / on (web) Publishing site
- II. Related Work
III. Methodology - Criticizing Ethics According to Artificial Intelligence / 2408.04609 / ISBN:https://doi.org/10.48550/arXiv.2408.04609 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Preliminary notes
2 Clarifying conceptual ambiguities
3 Critical Reflection on AI Risks
4 Exploring epistemic challenges
5 Investigating fundamental normative issues - Between Copyright and Computer Science: The Law and Ethics of Generative AI / 2403.14653 / ISBN:https://doi.org/10.48550/arXiv.2403.14653 / Published by ArXiv / on (web) Publishing site
- Introduction
I. The Why and How Behind LLMs
II. The Difference Between Academic and Commercial Research
III. A Guide for Data in LLM Research
IV. The Path Ahead - The Responsible Foundation Model Development Cheatsheet: A Review of Tools & Resources / 2406.16746 / ISBN:https://doi.org/10.48550/arXiv.2406.16746 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Methodology & Guidelines
3 Data Sources
4 Data Preparation
6 Model Training
7 Environmental Impact
8 Model Evaluation
9 Model Release & Monitoring
References - Recent Advances in Generative AI and Large Language Models: Current Status, Challenges, and Perspectives / 2407.14962 / ISBN:https://doi.org/10.48550/arXiv.2407.14962 / Published by ArXiv / on (web) Publishing site
- Abstract
I. Introduction
II. Generative AI
III. Language Modeling
IV. Challenges of Generative AI and LLMs
References - VersusDebias: Universal Zero-Shot Debiasing for Text-to-Image Models via SLM-Based Prompt Engineering and Generative Adversary / 2407.19524 / ISBN:https://doi.org/10.48550/arXiv.2407.19524 / Published by ArXiv / on (web) Publishing site
- Abstract
I Introduction
3 Method
References
Appendices - Speculations on Uncertainty and Humane Algorithms / 2408.06736 / ISBN:https://doi.org/10.48550/arXiv.2408.06736 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 The Numbers of the Future
3 Uncertainty Ex Machina
4 Conclusions - Visualization Atlases: Explaining and Exploring Complex Topics through Data, Visualization, and Narration / 2408.07483 / ISBN:https://doi.org/10.48550/arXiv.2408.07483 / Published by ArXiv / on (web) Publishing site
- 3 Visualization Atlas Design Patterns
4 Interviews with Visualization Atlas Creators
6 Key Characteristics of Visualization Atlases
7 Discussion - Neuro-Symbolic AI for Military Applications / 2408.09224 / ISBN:https://doi.org/10.48550/arXiv.2408.09224 / Published by ArXiv / on (web) Publishing site
- Abstract
I. Introduction
II. Neuro-Symbolic AI
III. Autonomy in Military Weapons Systems
V. Challenges and Risks
VI. Interpretability and Explainability
VII. Conclusion
References - Conference Submission and Review Policies to Foster Responsible Computing Research / 2408.09678 / ISBN:https://doi.org/10.48550/arXiv.2408.09678 / Published by ArXiv / on (web) Publishing site
- Introduction
Obtaining Consent - Don't Kill the Baby: The Case for AI in Arbitration / 2408.11608 / ISBN:https://doi.org/10.48550/arXiv.2408.11608 / Published by ArXiv / on (web) Publishing site
- Abstract
Introduction
1. What is AI
2. Designating AI as an Arbitrator is Consistent with FAA
3. Practical and Strategic Benefits of Using AI in Arbitration
1. Resistance Against AI Does Not Offer Conclusive Reasons for Outright Rejection
2. Let AI Grow Under Favorable Conditions: Avoiding Overly Moralistic Views
3. Arbitration Should Allow Flexible, Contract-Based Experimentation in a Fast- Evolving Regulatory Landscape - CIPHER: Cybersecurity Intelligent Penetration-testing Helper for Ethical Researcher / 2408.11650 / ISBN:https://doi.org/10.48550/arXiv.2408.11650 / Published by ArXiv / on (web) Publishing site
- 1. Introduction
2. Background and Related Works
3. Methodology
4. Experiment Results
6. Conclusion
References - The Problems with Proxies: Making Data Work Visible through Requester Practices / 2408.11667 / ISBN:https://doi.org/10.48550/arXiv.2408.11667 / Published by ArXiv / on (web) Publishing site
- Abstract
Related Work
Methods
Findings
Discussion
References
Appendix - Promises and challenges of generative artificial intelligence for human learning / 2408.12143 / ISBN:https://doi.org/10.48550/arXiv.2408.12143 / Published by ArXiv / on (web) Publishing site
- Abstract
2 Promises
3 Challenges
References
Tables - Catalog of General Ethical Requirements for AI Certification / 2408.12289 / ISBN:https://doi.org/10.48550/arXiv.2408.12289 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Operationalizable minimum requirements
5 Overall Ethical Requirements (O)
6 Fairness (F)
7 Privacy and Data Protection (P)
8 Safety and Robustness (SR)
9 Sustainability (SU)
10 Transparency and Explainability (T)
11 Truthfulness (TR) - Dataset | Mindset = Explainable AI | Interpretable AI / 2408.12420 / ISBN:https://doi.org/10.48550/arXiv.2408.12420 / Published by ArXiv / on (web) Publishing site
- Abstract
1. Introduction
3. Database and Experimental Setup
5. Results Discussion - Is Generative AI the Next Tactical Cyber Weapon For Threat Actors? Unforeseen Implications of AI Generated Cyber Attacks / 2408.12806 / ISBN:https://doi.org/10.48550/arXiv.2408.12806 / Published by ArXiv / on (web) Publishing site
- I. Introduction
II. Related Work
III. Generative AI
IV. Attack Methodology - Has Multimodal Learning Delivered Universal Intelligence in Healthcare? A Comprehensive Survey / 2408.12880 / ISBN:https://doi.org/10.48550/arXiv.2408.12880 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Preliminaries
3 Multimodal Medical Studies
4 Contrastive Foundation Models (CFMs)
5 Multimodal LLMs (MLLMs)
6 Discussions of Current Studies
7 Challenges and Future Directions
References
Appendix - Aligning XAI with EU Regulations for Smart Biomedical Devices: A Methodology for Compliance Analysis / 2408.15121 / ISBN:https://doi.org/10.48550/arXiv.2408.15121 / Published by ArXiv / on (web) Publishing site
- Abstract
2 Related Work
3 Methodology
4 Background
5 Explanation Requirements and Legal Explanatory Goals
7 Case Studies: Closed-Loop and Semi-Closed-Loop Control
9 Threats to Validity - What Is Required for Empathic AI? It Depends, and Why That Matters for AI Developers and Users / 2408.15354 / ISBN:https://doi.org/10.48550/arXiv.2408.15354 / Published by ArXiv / on (web) Publishing site
- “Fine cuts” of Empathy: Capabilities and Distinctions under the Empathy Umbrella
Implications for AI Creators and Users
Conclusion - Trustworthy and Responsible AI for Human-Centric Autonomous Decision-Making Systems / 2408.15550 / ISBN:https://doi.org/10.48550/arXiv.2408.15550 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Trustworthy and Responsible AI Definition
4 Biases
5 Trustworthy and Responsible AI in Human-centric Applications
6 Open Challenges
7 Guidelines and Recommendations
8 Conclusion and Final Remarks
References - A Survey for Large Language Models in Biomedicine / 2409.00133 / ISBN:https://doi.org/10.48550/arXiv.2409.00133 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Background
3 LLMs in Zero-Shot Biomedical Applications
4 Adapting General LLMs to the Biomedical Field
5 Discussion
6 Conclusion
References - Digital Homunculi: Reimagining Democracy Research with Generative Agents / 2409.00826 / ISBN:https://doi.org/10.48550/arXiv.2409.00826 / Published by ArXiv / on (web) Publishing site
- Abstract
1. Introduction
2. The Experimentation Bottleneck
3. How GenAI Could Make a Difference
4. Risks and Caveats
5. Annoyances or Dealbreakers? - The overlooked need for Ethics in Complexity Science: Why it matters / 2409.02002 / ISBN:https://doi.org/10.48550/arXiv.2409.02002 / Published by ArXiv / on (web) Publishing site
- Mapping ethical challenges in complexity science
- AI Governance in Higher Education: Case Studies of Guidance at Big Ten Universities / 2409.02017 / ISBN:https://doi.org/10.48550/arXiv.2409.02017 / Published by ArXiv / on (web) Publishing site
- Background
Methods
Results - Preliminary Insights on Industry Practices for Addressing Fairness Debt / 2409.02432 / ISBN:https://doi.org/10.48550/arXiv.2409.02432 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
3 Method - DetoxBench: Benchmarking Large Language Models for Multitask Fraud & Abuse Detection / 2409.06072 / ISBN:https://doi.org/10.48550/arXiv.2409.06072 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Prior Benchmarks
3 Data Details
4 LLM Services (Infrastructure)
5 Prompting
7 Limitations
9 Conclusion & Future Work
References
10 Appendix - Exploring AI Futures Through Fictional News Articles / 2409.06354 / ISBN:https://doi.org/10.48550/arXiv.2409.06354 / Published by ArXiv / on (web) Publishing site
- Reflections from two workshop participants
Discussion and conclusion - Face Recognition: to Deploy or not to Deploy? A Framework for Assessing the Proportional Use of Face Recognition Systems in Real-World Scenarios / 2402.05731 / ISBN:https://doi.org/10.48550/arXiv.2402.05731 / Published by ArXiv / on (web) Publishing site
- References
- The Responsible Foundation Model Development Cheatsheet: A Review of Tools & Resources / 2406.16746 / ISBN:https://doi.org/10.48550/arXiv.2406.16746 / Published by ArXiv / on (web) Publishing site
- B Cheatsheet Samples
- Catalog of General Ethical Requirements for AI Certification / 2408.12289 / ISBN:https://doi.org/10.48550/arXiv.2408.12289 / Published by ArXiv / on (web) Publishing site
- References
- The overlooked need for Ethics in Complexity Science: Why it matters / 2409.02002 / ISBN:https://doi.org/10.48550/arXiv.2409.02002 / Published by ArXiv / on (web) Publishing site
- Annexus
- On the Creativity of Large Language Models / 2304.00008 / ISBN:https://doi.org/10.48550/arXiv.2304.00008 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 A Creative Journey from Ada Lovelace to Foundation Models
3 Large Language Models and Boden’s Three Criteria
4 Easy and Hard Problems in Machine Creativity
5 Practical Implications
6 Conclusion
References - Artificial intelligence to advance Earth observation: : A review of models, recent trends, and pathways forward / 2305.08413 / ISBN:https://doi.org/10.48550/arXiv.2305.08413 / Published by ArXiv / on (web) Publishing site
- Introduction
Part I Modelling - Machine learning, computer vision and processing
1 Machine learning and computer vision for Earth observation
Part II Understanding - Physics-machine learning interplay, causality and ontologies
3 Knowledge-based AI and Earth observation
5 Physics-aware machine learning
Part III Communicating - Machine-user interaction, trustworthiness & ethics
6 User-centric Earth observation
7 Earth observation and society: the growing relevance of ethics
Conclusions
References - LLM generated responses to mitigate the impact of hate speech / 2311.16905 / ISBN:https://doi.org/10.48550/arXiv.2311.16905 / Published by ArXiv / on (web) Publishing site
- 2 Related Work
3 Dataset
4 Hate Classifier Model
5 Retrieval-Augmented Generation
9 Limitations
10 Ethical Considerations
References
A Reproducibility
C Verified Articles
E Used Prompts - Why business adoption of quantum and AI technology must be ethical / 2312.10081 / ISBN:https://doi.org/10.48550/arXiv.2312.10081 / Published by ArXiv / on (web) Publishing site
- Argument from a holistic and humanistic perspective
Argument by analogy: The case of sustainability - Views on AI aren't binary -- they're plural / 2312.14230 / ISBN:https://doi.org/10.48550/arXiv.2312.14230 / Published by ArXiv / on (web) Publishing site
- Abstract
The false binary: A note on language
Conclusion
Glossary
References - Data-Centric Foundation Models in Computational Healthcare: A Survey / 2401.02458 / ISBN:https://doi.org/10.48550/arXiv.2401.02458 / Published by ArXiv / on (web) Publishing site
- 2 Foundation Models
3 Foundation Models in Healthcare
4 Multi-Modal Data Fusion
5 Data Quantity
6 Data Annotation
7 Data Privacy
8 Performance Evaluation
9 Challenges and Opportunities
References
A Healthcare Data Modalities - Ethical Artificial Intelligence Principles and Guidelines for the Governance and Utilization of Highly Advanced Large Language Models / 2401.10745 / ISBN:https://doi.org/10.48550/arXiv.2401.10745 / Published by ArXiv / on (web) Publishing site
- Background
Comprehending Advanced Large Language Models and its Capabilities - Recent Advances in Hate Speech Moderation: Multimodality and the Role of Large Models / 2401.16727 / ISBN:https://doi.org/10.48550/arXiv.2401.16727 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Hate Speech
3 Methodology
4 Challenges
5 Future Directions
6 Conclusion
References - Integrating Generative AI in Hackathons: Opportunities, Challenges, and Educational Implications / 2401.17434 / ISBN:https://doi.org/10.48550/arXiv.2401.17434 / Published by ArXiv / on (web) Publishing site
- 1. Introduction
2. Methodology
3. Results
4. Discussion
5. Conclusion - Large language models as linguistic simulators and cognitive models in human research / 2402.04470 / ISBN:https://doi.org/10.48550/arXiv.2402.04470 / Published by ArXiv / on (web) Publishing site
- Abstract
Language models as human participants
Six fallacies that misinterpret language models
Using language models to simulate roles and model cognitive processes
Concluding remarks - Navigating LLM Ethics: Advancements, Challenges, and Future Directions / 2406.18841 / ISBN:https://doi.org/10.48550/arXiv.2406.18841 / Published by ArXiv / on (web) Publishing site
- Abstract
I. Introduction
II. Conceptualization and frameworks
IV. Findings and Resultant Themes
V. Discussion
References - How Mature is Requirements Engineering for AI-based Systems? A Systematic Mapping Study on Practices, Challenges, and Future Research Directions / 2409.07192 / ISBN:https://doi.org/10.48550/arXiv.2409.07192 / Published by ArXiv / on (web) Publishing site
- 3 Research Design
4 Results
5 Open Challenges and Future Research Directions (RQ5)
6 Discussions
References - Synthetic Human Memories: AI-Edited Images and Videos Can Implant False Memories and Distort Recollection / 2409.08895 / ISBN:https://doi.org/10.48550/arXiv.2409.08895 / Published by ArXiv / on (web) Publishing site
- Introduction
1 Related Work
2 Methodology
5 Discussion
6 Conclusion - Improving governance outcomes through AI documentation: Bridging theory and practice / 2409.08960 / ISBN:https://doi.org/10.48550/arXiv.2409.08960 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
4 Results
References - ValueCompass: A Framework of Fundamental Values for Human-AI Alignment / 2409.09586 / ISBN:https://doi.org/10.48550/arXiv.2409.09586 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Related Work
3 Designing ValueCompass: A Comprehensive Framework for Defining Fundamental Values in Alignment
4 Operationalizing ValueCompass: Methods to Measure Value Alignment of Humans and AI
5 Findings with ValueCompass: The Status Quo of Human-AI Value Alignment
6 Discussion
7 Conclusion
References - Beyond Algorithmic Fairness: A Guide to Develop and Deploy Ethical AI-Enabled Decision-Support Tools / 2409.11489 / ISBN:https://doi.org/10.48550/arXiv.2409.11489 / Published by ArXiv / on (web) Publishing site
- 2 Ethical Considerations in AI-Enabled Optimization
3 Case Studies in AI-Enabled Optimization - Reporting Non-Consensual Intimate Media: An Audit Study of Deepfakes / 2409.12138 / ISBN:https://doi.org/10.48550/arXiv.2409.12138 / Published by ArXiv / on (web) Publishing site
- 2 Related Research
3 Method
5 Discussion
6 Conclusion
References
Appendices - Generative AI Carries Non-Democratic Biases and Stereotypes: Representation of Women, Black Individuals, Age Groups, and People with Disability in AI-Generated Images across Occupations / 2409.13869 / ISBN:https://doi.org/10.48550/arXiv.2409.13869 / Published by ArXiv / on (web) Publishing site
- How Does AI See Humans in their Occupations?
Data and Results
Middle-aged and elders’ representation - GenAI Advertising: Risks of Personalizing Ads with LLMs / 2409.15436 / ISBN:https://doi.org/10.48550/arXiv.2409.15436 / Published by ArXiv / on (web) Publishing site
- 2 Background and Related Work
3 Chatbot Ad Engine Design
5 User Study Methodology
6 User Study Results
7 Discussion
References
A Appendix - XTRUST: On the Multilingual Trustworthiness of Large Language Models / 2409.15762 / ISBN:https://doi.org/10.48550/arXiv.2409.15762 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Related Works
3 XTRUST Construction
4 Experiments
5 Conclusion
References
Appendices - Artificial Human Intelligence: The role of Humans in the Development of Next Generation AI / 2409.16001 / ISBN:https://doi.org/10.48550/arXiv.2409.16001 / Published by ArXiv / on (web) Publishing site
- 2 Views on Intelligence
3 Origins and the Path leading to AHI
4 Brain-inspired Information processing
5 Challenges and Perspectives in Human-Level AI Development
6 Final Thoughts and Discussions
References - Ethical and Scalable Automation: A Governance and Compliance Framework for Business Applications / 2409.16872 / ISBN:https://doi.org/10.48550/arXiv.2409.16872 / Published by ArXiv / on (web) Publishing site
- 1. Introduction
2. Literature Review
3. Methodology
4. Framework Development
6. Conclusion - Decoding Large-Language Models: A Systematic Overview of Socio-Technical Impacts, Constraints, and Emerging Questions / 2409.16974 / ISBN:https://doi.org/10.48550/arXiv.2409.16974 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
5 Aims & Objectives (RQ1)
6 Methodologies & Capabilities (RQ2)
7 Limitations & Considerations (RQ3)
8 Discussion
References - Social Media Bot Policies: Evaluating Passive and Active Enforcement / 2409.18931 / ISBN:https://doi.org/10.48550/arXiv.2409.18931 / Published by ArXiv / on (web) Publishing site
- I. Introduction
II. Related Work
III. Current Platform Measures
IV. Methodology
V. Results
References - Safety challenges of AI in medicine / 2409.18968 / ISBN:https://doi.org/10.48550/arXiv.2409.18968 / Published by ArXiv / on (web) Publishing site
- Abstract
2 Inherent problems of AI related to medicine
4 AI safety issues related to large language models in medicine
References - Responsible AI in Open Ecosystems: Reconciling Innovation with Risk Assessment and Disclosure / 2409.19104 / ISBN:https://doi.org/10.48550/arXiv.2409.19104 / Published by ArXiv / on (web) Publishing site
- 3 Methods
References
B Service-ready Features and Identifiers - The Gradient of Health Data Privacy / 2410.00897 / ISBN:https://doi.org/10.48550/arXiv.2410.00897 / Published by ArXiv / on (web) Publishing site
- Abstract
2 Background and Related Work
3 The Health Data Privacy Gradient
4 Technical Implementation of a Privacy Gradient Model
5 Legal and Ethical Implications
7 Policy Implications and Recommendations
References - Enhancing transparency in AI-powered customer engagement / 2410.01809 / ISBN:https://doi.org/10.48550/arXiv.2410.01809 / Published by ArXiv / on (web) Publishing site
- Challenges to Achieving Transparency
References - Ethical software requirements from user reviews: A systematic literature review / 2410.01833 / ISBN:https://doi.org/10.48550/arXiv.2410.01833 / Published by ArXiv / on (web) Publishing site
- Abstract
I. Introduction
II. Background
III. Research Methodology
IV. Results
V. Discussion
References
APPENDIX A SELECTED STUDIES
APPENDIX B DATA EXTRACTED FROM PRIMARY STUDIES
APPENDIX D ADDITIONAL INFORMATION - Clinnova Federated Learning Proof of Concept: Key Takeaways from a Cross-border Collaboration / 2410.02443 / ISBN:https://doi.org/10.48550/arXiv.2410.02443 / Published by ArXiv / on (web) Publishing site
- V. Proof of Concepts 2
- DailyDilemmas: Revealing Value Preferences of LLMs with Quandaries of Daily Life / 2410.02683 / ISBN:https://doi.org/10.48550/arXiv.2410.02683 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
4 Daily Dilemmas: Dataset Analysis
5 Model Preference and Steerability on Daily Dilemmas
References - Application of AI in Credit Risk Scoring for Small Business Loans: A case study on how AI-based random forest model improves a Delphi model outcome in the case of Azerbaijani SMEs / 2410.05330 / ISBN:https://doi.org/10.48550/arXiv.2410.05330 / Published by ArXiv / on (web) Publishing site
- Abstract
Introduction
Discussion
References - AI-Press: A Multi-Agent News Generating and Feedback Simulation System Powered by Large Language Models / 2410.07561 / ISBN:https://doi.org/10.48550/arXiv.2410.07561 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
References - DailyDilemmas: Revealing Value Preferences of LLMs with Quandaries of Daily Life / 2410.02683 / ISBN:https://doi.org/10.48550/arXiv.2410.02683 / Published by ArXiv / on (web) Publishing site
- Appendices