If you need more than one keyword, modify the field and separate the keywords with an underscore (_).
The list of search keywords can be up to 50 characters long.
If you modify the keywords, press Enter within the field to confirm the new search key.
Tag: chosen
Bibliography items where it occurs: 124
- AI Ethics Issues in Real World: Evidence from AI Incident Database / 2206.07635 / ISBN:https://doi.org/10.48550/arXiv.2206.07635 / Published by ArXiv / on (web) Publishing site
- 3 Method
- What does it mean to be a responsible AI practitioner: An ontology of roles and skills / 2205.03946 / ISBN:https://doi.org/10.48550/arXiv.2205.03946 / Published by ArXiv / on (web) Publishing site
- 4 Proposed competency framework for responsible AI practitioners
- QB4AIRA: A Question Bank for AI Risk Assessment / 2305.09300 / ISBN:https://doi.org/10.48550/arXiv.2305.09300 / Published by ArXiv / on (web) Publishing site
- 2 The Question Bank: QB4AIRA
- Regulating AI manipulation: Applying Insights from behavioral economics and psychology to enhance the practicality of the EU AI Act / 2308.02041 / ISBN:https://doi.org/10.48550/arXiv.2308.02041 / Published by ArXiv / on (web) Publishing site
- 2 Clarifying Terminologies of Article-5: Insights from Behavioral Economics and Psychology
- Normative Ethics Principles for Responsible AI Systems: Taxonomy and Future Directions / 2208.12616 / ISBN:https://doi.org/10.48550/arXiv.2208.12616 / Published by ArXiv / on (web) Publishing site
- 3 Taxonomy of ethical principles
A Methodology
- A Survey of Safety and Trustworthiness of Large Language Models through the Lens of Verification and Validation / 2305.11391 / ISBN:https://doi.org/10.48550/arXiv.2305.11391 / Published by ArXiv / on (web) Publishing site
- 3 Vulnerabilities, Attack, and Limitations
- Getting pwn'd by AI: Penetration Testing with Large Language Models / 2308.00121 / ISBN:https://doi.org/10.48550/arXiv.2308.00121 / Published by ArXiv / on (web) Publishing site
- 3 LLM-based penetration testing
- Collect, Measure, Repeat: Reliability Factors for Responsible AI Data Collection / 2308.12885 / ISBN:https://doi.org/10.48550/arXiv.2308.12885 / Published by ArXiv / on (web) Publishing site
- 2 Related Work on Data Excellence
3 Reliability and Reproducibility Metrics for Responsible Data Collection
5 Results
- Ethical Framework for Harnessing the Power of AI in Healthcare and Beyond / 2309.00064 / ISBN:https://doi.org/10.48550/arXiv.2309.00064 / Published by ArXiv / on (web) Publishing site
- 2 Black box and lack of transparency
- FUTURE-AI: Guiding Principles and Consensus Recommendations for Trustworthy Artificial Intelligence in Medical Imaging / 2109.09658 / ISBN:https://doi.org/10.48550/arXiv.2109.09658 / Published by ArXiv / on (web) Publishing site
- 3. Universality - For Standardised AI in Medical Imaging
- EALM: Introducing Multidimensional Ethical Alignment in Conversational Information Retrieval / 2310.00970 / ISBN:https://doi.org/10.48550/arXiv.2310.00970 / Published by ArXiv / on (web) Publishing site
- 3 Dataset Construction
- If our aim is to build morality into an artificial agent, how might we begin to go about doing so? / 2310.08295 / ISBN:https://doi.org/10.48550/arXiv.2310.08295 / Published by ArXiv / on (web) Publishing site
- 3 Proposing a Hybrid Approach
- STREAM: Social data and knowledge collective intelligence platform for TRaining Ethical AI Models / 2310.05563 / ISBN:https://doi.org/10.48550/arXiv.2310.05563 / Published by ArXiv / on (web) Publishing site
- 3 The applications of STREAM
- Autonomous Vehicles an overview on system, cyber security, risks, issues, and a way forward / 2309.14213 / ISBN:https://doi.org/10.48550/arXiv.2309.14213 / Published by ArXiv / on (web) Publishing site
- 4. Traffic Flow prediction in Autonomous vehicles
- The Glamorisation of Unpaid Labour: AI and its Influencers / 2308.02399 / ISBN:https://doi.org/10.48550/arXiv.2308.02399 / Published by ArXiv / on (web) Publishing site
- 2 Harms of Influencer Marketing
- AI & Blockchain as sustainable teaching and learning tools to cope with the 4IR / 2305.01088 / ISBN:https://doi.org/10.48550/arXiv.2305.01088 / Published by ArXiv / on (web) Publishing site
- 7. AI-powered content creation and curation
- Language Agents for Detecting Implicit Stereotypes in Text-to-Image Models at Scale / 2310.11778 / ISBN:https://doi.org/10.48550/arXiv.2310.11778 / Published by ArXiv / on (web) Publishing site
- 3 Agent Benchmark
Appendix A Data Details
- Specific versus General Principles for Constitutional AI / 2310.13798 / ISBN:https://doi.org/10.48550/arXiv.2310.13798 / Published by ArXiv / on (web) Publishing site
- 2 AI feedback on specific problematic AI traits
- The Self 2.0: How AI-Enhanced Self-Clones Transform Self-Perception and Improve Presentation Skills / 2310.15112 / ISBN:https://doi.org/10.48550/arXiv.2310.15112 / Published by ArXiv / on (web) Publishing site
- 2 Related Work
- Moral Responsibility for AI Systems / 2310.18040 / ISBN:https://doi.org/10.48550/arXiv.2310.18040 / Published by ArXiv / on (web) Publishing site
- 3 The BvH and HK Definitions
4 The Causal Condition
- AI for Open Science: A Multi-Agent Perspective for Ethically Translating Data to Knowledge / 2310.18852 / ISBN:https://doi.org/10.48550/arXiv.2310.18852 / Published by ArXiv / on (web) Publishing site
- 4 Optimizing an Openness Metric in AI for Science
- Towards Effective Paraphrasing for Information Disguise / 2311.05018 / ISBN:https://doi.org/10.1007/978-3-031-28238-6_22 / Published by ArXiv / on (web) Publishing site
- 3 Methodology
- Kantian Deontology Meets AI Alignment: Towards Morally Grounded Fairness Metrics / 2311.05227 / ISBN:https://doi.org/10.48550/arXiv.2311.05227 / Published by ArXiv / on (web) Publishing site
- 4 Deontological AI Alignment
- Towards ethical multimodal systems / 2304.13765 / ISBN:https://doi.org/10.48550/arXiv.2304.13765 / Published by ArXiv / on (web) Publishing site
- 2 Related Work
3 Crafting an Ethical Dataset
- Synergizing Human-AI Agency: A Guide of 23 Heuristics for Service Co-Creation with LLM-Based Agents / 2310.15065 / ISBN:https://doi.org/10.48550/arXiv.2310.15065 / Published by ArXiv / on (web) Publishing site
- 3 Method
- Revolutionizing Customer Interactions: Insights and Challenges in Deploying ChatGPT and Generative Chatbots for FAQs / 2311.09976 / ISBN:https://doi.org/10.48550/arXiv.2311.09976 / Published by ArXiv / on (web) Publishing site
- 1. Introduction
4. ChatGPT
- GPT in Data Science: A Practical Exploration of Model Selection / 2311.11516 / ISBN:https://doi.org/10.48550/arXiv.2311.11516 / Published by ArXiv / on (web) Publishing site
- III. Approach: capturing and representing heuristics behind GPT's decision-making process
- Ethics and Responsible AI Deployment / 2311.14705 / ISBN:https://doi.org/10.48550/arXiv.2311.14705 / Published by ArXiv / on (web) Publishing site
- 4. Addressing bias, transparency, and accountability
- Survey on AI Ethics: A Socio-technical Perspective / 2311.17228 / ISBN:https://doi.org/10.48550/arXiv.2311.17228 / Published by ArXiv / on (web) Publishing site
- 4 Fairness and equity
- Intelligence Primer / 2008.07324 / ISBN:https://doi.org/10.48550/arXiv.2008.07324 / Published by ArXiv / on (web) Publishing site
- 6 Measuring intelligence
- Investigating Responsible AI for Scientific Research: An Empirical Study / 2312.09561 / ISBN:https://doi.org/10.48550/arXiv.2312.09561 / Published by ArXiv / on (web) Publishing site
- III. Research methodology
- Learning Human-like Representations to Enable Learning Human Values / 2312.14106 / ISBN:https://doi.org/10.48550/arXiv.2312.14106 / Published by ArXiv / on (web) Publishing site
- Abstract
- Improving Task Instructions for Data Annotators: How Clear Rules and Higher Pay Increase Performance in Data Annotation in the AI Economy / 2312.14565 / ISBN:https://doi.org/10.48550/arXiv.2312.14565 / Published by ArXiv / on (web) Publishing site
- Appendix
- Culturally-Attuned Moral Machines: Implicit Learning of Human Value Systems by AI through Inverse Reinforcement Learning / 2312.17479 / ISBN:https://doi.org/10.48550/arXiv.2312.17479 / Published by ArXiv / on (web) Publishing site
- Methods
- Exploring the Frontiers of LLMs in Psychological Applications: A Comprehensive Review / 2401.01519 / ISBN:https://doi.org/10.48550/arXiv.2401.01519 / Published by ArXiv / on (web) Publishing site
- 5. LLMs in social and cultural psychology
6. LLMs as research tools in psychology
- MULTI-CASE: A Transformer-based Ethics-aware Multimodal Investigative Intelligence Framework / 2401.01955 / ISBN:https://doi.org/10.48550/arXiv.2401.01955 / Published by ArXiv / on (web) Publishing site
- V. Evaluation
- AI Ethics Principles in Practice: Perspectives of Designers and Developers / 2112.07467 / ISBN:https://doi.org/10.48550/arXiv.2112.07467 / Published by ArXiv / on (web) Publishing site
- III. Methods
IV. Results
V. Discussion and suggestions
VI. Support mechanisms
- Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making / 2401.08691 / ISBN:https://doi.org/10.48550/arXiv.2401.08691 / Published by ArXiv / on (web) Publishing site
- Contents / List of figures / List of tables / Acronyms
1 Introduction
I Understanding bias
2 Bias and moral framework in AI-based decision making
4 Fairness metrics landscape in machine learning
II Mitigating bias
5 Fairness mitigation
6 FFTree: a flexible tree to mitigate multiple fairness criteria
III Accounting for bias
7 Addressing fairness in the banking sector
9 Towards fairness through time
- FAIR Enough: How Can We Develop and Assess a FAIR-Compliant Dataset for Large Language Models' Training? / 2401.11033 / ISBN:https://doi.org/10.48550/arXiv.2401.11033 / Published by ArXiv / on (web) Publishing site
- 4 Framework for FAIR Data Principles Integration in LLM Development
- A Scoping Study of Evaluation Practices for Responsible AI Tools: Steps Towards Effectiveness Evaluations / 2401.17486 / ISBN:https://doi.org/10.48550/arXiv.2401.17486 / Published by ArXiv / on (web) Publishing site
- 4 RAI tool evaluation practices
- (A)I Am Not a Lawyer, But...: Engaging Legal Experts towards Responsible LLM Policies for Legal Advice / 2402.01864 / ISBN:https://doi.org/10.48550/arXiv.2402.01864 / Published by ArXiv / on (web) Publishing site
- 3 Methods: case-based expert deliberation
- POLARIS: A framework to guide the development of Trustworthy AI systems / 2402.05340 / ISBN:https://doi.org/10.48550/arXiv.2402.05340 / Published by ArXiv / on (web) Publishing site
- 7 Conclusion
- Face Recognition: to Deploy or not to Deploy? A Framework for Assessing the Proportional Use of Face Recognition Systems in Real-World Scenarios / 2402.05731 / ISBN:https://doi.org/10.48550/arXiv.2402.05731 / Published by ArXiv / on (web) Publishing site
- 4. Proposed framework
5. The framework in practice
- Ethics in AI through the Practitioner's View: A Grounded Theory Literature Review / 2206.09514 / ISBN:https://doi.org/10.48550/arXiv.2206.09514 / Published by ArXiv / on (web) Publishing site
- 4 Challenges, Threats and Limitations
- How do machines learn? Evaluating the AIcon2abs method / 2401.07386 / ISBN:https://doi.org/10.48550/arXiv.2401.07386 / Published by ArXiv / on (web) Publishing site
- 1. Introduction
- What if LLMs Have Different World Views: Simulating Alien Civilizations with LLM-based Agents / 2402.13184 / ISBN:https://doi.org/10.48550/arXiv.2402.13184 / Published by ArXiv / on (web) Publishing site
- 3 CosmoAgent Simulation Setting
4 CosmoAgent Architecture
B Secretary Agent Prompt
- The METRIC-framework for assessing data quality for trustworthy AI in medicine: a systematic review / 2402.13635 / ISBN:https://doi.org/10.48550/arXiv.2402.13635 / Published by ArXiv / on (web) Publishing site
- METRIC-framework for medical training data
Methods
- FATE in MMLA: A Student-Centred Exploration of Fairness, Accountability, Transparency, and Ethics in Multimodal Learning Analytics / 2402.19071 / ISBN:https://doi.org/10.48550/arXiv.2402.19071 / Published by ArXiv / on (web) Publishing site
- 3. Methods
- Guidelines for Integrating Value Sensitive Design in Responsible AI Toolkits / 2403.00145 / ISBN:https://doi.org/10.48550/arXiv.2403.00145 / Published by ArXiv / on (web) Publishing site
- 3 Methodology
- Towards an AI-Enhanced Cyber Threat Intelligence Processing Pipeline / 2403.03265 / ISBN:https://doi.org/10.48550/arXiv.2403.03265 / Published by ArXiv / on (web) Publishing site
- III. The AI-Enhanced CTI Processing Pipeline
- Moral Sparks in Social Media Narratives / 2310.19268 / ISBN:https://doi.org/10.48550/arXiv.2310.19268 / Published by ArXiv / on (web) Publishing site
- 3. Data
- Evaluation Ethics of LLMs in Legal Domain / 2403.11152 / ISBN:https://doi.org/10.48550/arXiv.2403.11152 / Published by ArXiv / on (web) Publishing site
- 4 Experiment
7 Ethic Impact
- Trust in AI: Progress, Challenges, and Future Directions / 2403.14680 / ISBN:https://doi.org/10.48550/arXiv.2403.14680 / Published by ArXiv / on (web) Publishing site
- 3. Findings
4. Discussion
- Safeguarding Marketing Research: The Generation, Identification, and Mitigation of AI-Fabricated Disinformation / 2403.14706 / ISBN:https://doi.org/10.48550/arXiv.2403.14706 / Published by ArXiv / on (web) Publishing site
- Data
Results
- The Journey to Trustworthy AI - Part 1: Pursuit of Pragmatic Frameworks / 2403.15457 / ISBN:https://doi.org/10.48550/arXiv.2403.15457 / Published by ArXiv / on (web) Publishing site
- A Appendix
- The Pursuit of Fairness in Artificial Intelligence Models A Survey / 2403.17333 / ISBN:https://doi.org/10.48550/arXiv.2403.17333 / Published by ArXiv / on (web) Publishing site
- 5 Ways to mitigate bias and promote Fairness
- Is Your AI Truly Yours? Leveraging Blockchain for Copyrights, Provenance, and Lineage / 2404.06077 / ISBN:https://doi.org/10.48550/arXiv.2404.06077 / Published by ArXiv / on (web) Publishing site
- VI. Evaluation
- A Critical Survey on Fairness Benefits of Explainable AI / 2310.13007 / ISBN:https://doi.org/10.1145/3630106.3658990 / Published by ArXiv / on (web) Publishing site
- 3 Methodology
- Generative Ghosts: Anticipating Benefits and Risks of AI Afterlives / 2402.01662 / ISBN:https://doi.org/10.48550/arXiv.2402.01662 / Published by ArXiv / on (web) Publishing site
- 5 Discussion
- Epistemic Power in AI Ethics Labor: Legitimizing Located Complaints / 2402.08171 / ISBN:https://doi.org/10.1145/3630106.3658973 / Published by ArXiv / on (web) Publishing site
- 6 Conclusions: Towards Humble Technical Practices
- On the role of ethics and sustainability in business innovation / 2404.07678 / ISBN:https://doi.org/10.48550/arXiv.2404.07678 / Published by ArXiv / on (web) Publishing site
- Sustainability considerations
- Taxonomy to Regulation: A (Geo)Political Taxonomy for AI Risks and Regulatory Measures in the EU AI Act / 2404.11476 / ISBN:https://doi.org/10.48550/arXiv.2404.11476 / Published by ArXiv / on (web) Publishing site
- 4 European Union Artificial Intelligence Act
- Just Like Me: The Role of Opinions and Personal Experiences in The Perception of Explanations in Subjective Decision-Making / 2404.12558 / ISBN:https://doi.org/10.48550/arXiv.2404.12558 / Published by ArXiv / on (web) Publishing site
- 2 Method
- A Survey on Large Language Models for Critical Societal Domains: Finance, Healthcare, and Law / 2405.01769 / ISBN:https://doi.org/10.48550/arXiv.2405.01769 / Published by ArXiv / on (web) Publishing site
- 4 Medicine and Healthcare
- Should agentic conversational AI change how we think about ethics? Characterising an interactional ethics centred on respect / 2401.09082 / ISBN:https://doi.org/10.48550/arXiv.2401.09082 / Published by ArXiv / on (web) Publishing site
- Evaluating a system as a social actor
- Social Choice Should Guide AI Alignment in Dealing with Diverse Human Feedback / 2404.10271 / ISBN:https://doi.org/10.48550/arXiv.2404.10271 / Published by ArXiv / on (web) Publishing site
- 3. What Are the Collective Decision Problems and their Alternatives in this Context?
- Integrating Emotional and Linguistic Models for Ethical Compliance in Large Language Models / 2405.07076 / ISBN:https://doi.org/10.48550/arXiv.2405.07076 / Published by ArXiv / on (web) Publishing site
- 3 Quantitative Models of Emotions, Behaviors, and Ethics
Appendix S: Multiple Adversarial LLMs
- Pragmatic auditing: a pilot-driven approach for auditing Machine Learning systems / 2405.13191 / ISBN:https://doi.org/10.48550/arXiv.2405.13191 / Published by ArXiv / on (web) Publishing site
- 4 Conducting the Pilots
- Gender Bias Detection in Court Decisions: A Brazilian Case Study / 2406.00393 / ISBN:https://doi.org/10.48550/arXiv.2406.00393 / Published by ArXiv / on (web) Publishing site
- 3 Framework
A DVC Dataset: Domestic Violence Cases
- Transforming Computer Security and Public Trust Through the Exploration of Fine-Tuning Large Language Models / 2406.00628 / ISBN:https://doi.org/10.48550/arXiv.2406.00628 / Published by ArXiv / on (web) Publishing site
- 3 Experimental Design, Overview, and Discussion
- How Ethical Should AI Be? How AI Alignment Shapes the Risk Preferences of LLMs / 2406.01168 / ISBN:https://doi.org/10.48550/arXiv.2406.01168 / Published by ArXiv / on (web) Publishing site
- II. Risk Characteristics of LLMs
- Promoting Fairness and Diversity in Speech Datasets for Mental Health and Neurological Disorders Research / 2406.04116 / ISBN:https://doi.org/10.48550/arXiv.2406.04116 / Published by ArXiv / on (web) Publishing site
- 4. Desiderata
- Can Prompt Modifiers Control Bias? A Comparative Analysis of Text-to-Image Generative Models / 2406.05602 / Published by ArXiv / on (web) Publishing site
- 4. Methodology
- An Empirical Design Justice Approach to Identifying Ethical Considerations in the Intersection of Large Language Models and Social Robotics / 2406.06400 / ISBN:https://doi.org/10.48550/arXiv.2406.06400 / Published by ArXiv / on (web) Publishing site
- 3 Methodology
- The Ethics of Interaction: Mitigating Security Threats in LLMs / 2401.12273 / ISBN:https://doi.org/10.48550/arXiv.2401.12273 / Published by ArXiv / on (web) Publishing site
- 4 Towards Ethical Mitigation: A Proposed Methodology
- Fair by design: A sociotechnical approach to justifying the fairness of AI-enabled systems across the lifecycle / 2406.09029 / ISBN:https://doi.org/10.48550/arXiv.2406.09029 / Published by ArXiv / on (web) Publishing site
- 4 Assuring AI fairness in healthcare
- Documenting Ethical Considerations in Open Source AI Models / 2406.18071 / ISBN:https://doi.org/10.48550/arXiv.2406.18071 / Published by ArXiv / on (web) Publishing site
- 3 METHODOLOGY AND STUDY DESIGN
- AI Alignment through Reinforcement Learning from Human Feedback? Contradictions and Limitations / 2406.18346 / ISBN:https://doi.org/10.48550/arXiv.2406.18346 / Published by ArXiv / on (web) Publishing site
- 3 Limitations of RLxF
- A Blueprint for Auditing Generative AI / 2407.05338 / ISBN:https://doi.org/10.48550/arXiv.2407.05338 / Published by ArXiv / on (web) Publishing site
- 7 Clarifications and limitations
- Challenges and Best Practices in Corporate AI Governance:Lessons from the Biopharmaceutical Industry / 2407.05339 / ISBN:https://doi.org/10.48550/arXiv.2407.05339 / Published by ArXiv / on (web) Publishing site
- 1 Introduction | The need for corporate AI governance
- Rethinking Machine Ethics -- Can LLMs Perform Moral Reasoning through the Lens of Moral Theories? / 2308.15399 / ISBN:https://doi.org/10.48550/arXiv.2308.15399 / Published by ArXiv / on (web) Publishing site
- B Details of Instructions
- Past, Present, and Future: A Survey of The Evolution of Affective Robotics For Well-being / 2407.02957 / ISBN:https://doi.org/10.48550/arXiv.2407.02957 / Published by ArXiv / on (web) Publishing site
- III. Method
IV. Evolution of Affective Robots for Well-Being
- Assurance of AI Systems From a Dependability Perspective / 2407.13948 / ISBN:https://doi.org/10.48550/arXiv.2407.13948 / Published by ArXiv / on (web) Publishing site
- 2 Assurance for Systems Extended with AI and ML
5 Assurance and Alignment for AGI
- Honest Computing: Achieving demonstrable data lineage and provenance for driving data and process-sensitive policies / 2407.14390 / ISBN:https://doi.org/10.48550/arXiv.2407.14390 / Published by ArXiv / on (web) Publishing site
- 2. Threat Model for Honest Computing
- RogueGPT: dis-ethical tuning transforms ChatGPT4 into a Rogue AI in 158 Words / 2407.15009 / ISBN:https://doi.org/10.48550/arXiv.2407.15009 / Published by ArXiv / on (web) Publishing site
- III. Methodology
IV. Results
- Nudging Using Autonomous Agents: Risks and Ethical Considerations / 2407.16362 / ISBN:https://doi.org/10.48550/arXiv.2407.16362 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
5 Principles for the Nudge Lifecycle
- Mapping the individual, social, and biospheric impacts of Foundation Models / 2407.17129 / ISBN:https://doi.org/10.48550/arXiv.2407.17129 / Published by ArXiv / on (web) Publishing site
- A Appendix
- Integrating ESG and AI: A Comprehensive Responsible AI Assessment Framework / 2408.00965 / ISBN:https://doi.org/10.48550/arXiv.2408.00965 / Published by ArXiv / on (web) Publishing site
- 4 ESG-AI framework
- Surveys Considered Harmful? Reflecting on the Use of Surveys in AI Research, Development, and Governance / 2408.01458 / ISBN:https://doi.org/10.48550/arXiv.2408.01458 / Published by ArXiv / on (web) Publishing site
- 3 Methods
- Improving Large Language Model (LLM) fidelity through context-aware grounding: A systematic approach to reliability and veracity / 2408.04023 / ISBN:https://doi.org/10.48550/arXiv.2408.04023 / Published by ArXiv / on (web) Publishing site
- 3. Proposed framework
- The Responsible Foundation Model Development Cheatsheet: A Review of Tools & Resources / 2406.16746 / ISBN:https://doi.org/10.48550/arXiv.2406.16746 / Published by ArXiv / on (web) Publishing site
- 8 Model Evaluation
- Recent Advances in Generative AI and Large Language Models: Current Status, Challenges, and Perspectives / 2407.14962 / ISBN:https://doi.org/10.48550/arXiv.2407.14962 / Published by ArXiv / on (web) Publishing site
- II. Generative AI
III. Language Modeling
- VersusDebias: Universal Zero-Shot Debiasing for Text-to-Image Models via SLM-Based Prompt Engineering and Generative Adversary / 2407.19524 / ISBN:https://doi.org/10.48550/arXiv.2407.19524 / Published by ArXiv / on (web) Publishing site
- 3 Method
- Don't Kill the Baby: The Case for AI in Arbitration / 2408.11608 / ISBN:https://doi.org/10.48550/arXiv.2408.11608 / Published by ArXiv / on (web) Publishing site
- 2. Designating AI as an Arbitrator is Consistent with FAA
- CIPHER: Cybersecurity Intelligent Penetration-testing Helper for Ethical Researcher / 2408.11650 / ISBN:https://doi.org/10.48550/arXiv.2408.11650 / Published by ArXiv / on (web) Publishing site
- 3. Methodology
- Catalog of General Ethical Requirements for AI Certification / 2408.12289 / ISBN:https://doi.org/10.48550/arXiv.2408.12289 / Published by ArXiv / on (web) Publishing site
- 10 Transparency and Explainability (T)
- Dataset | Mindset = Explainable AI | Interpretable AI / 2408.12420 / ISBN:https://doi.org/10.48550/arXiv.2408.12420 / Published by ArXiv / on (web) Publishing site
- 2. Literature Review
- Is Generative AI the Next Tactical Cyber Weapon For Threat Actors? Unforeseen Implications of AI Generated Cyber Attacks / 2408.12806 / ISBN:https://doi.org/10.48550/arXiv.2408.12806 / Published by ArXiv / on (web) Publishing site
- IV. Attack Methodology
- Aligning XAI with EU Regulations for Smart Biomedical Devices: A Methodology for Compliance Analysis / 2408.15121 / ISBN:https://doi.org/10.48550/arXiv.2408.15121 / Published by ArXiv / on (web) Publishing site
- 8 Instructions for Use & Discussion of Findings
- What Is Required for Empathic AI? It Depends, and Why That Matters for AI Developers and Users / 2408.15354 / ISBN:https://doi.org/10.48550/arXiv.2408.15354 / Published by ArXiv / on (web) Publishing site
- Three Empathic AI Use Cases in Medicine
Implications for AI Creators and Users
- DetoxBench: Benchmarking Large Language Models for Multitask Fraud & Abuse Detection / 2409.06072 / ISBN:https://doi.org/10.48550/arXiv.2409.06072 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
- Trust and ethical considerations in a multi-modal, explainable AI-driven chatbot tutoring system: The case of collaboratively solving Rubik's Cube / 2402.01760 / ISBN:https://doi.org/10.48550/arXiv.2402.01760 / Published by ArXiv / on (web) Publishing site
- D. CausalRating: A Tool To Rate Sentiments Analysis Systems for Bias
- LLM generated responses to mitigate the impact of hate speech / 2311.16905 / ISBN:https://doi.org/10.48550/arXiv.2311.16905 / Published by ArXiv / on (web) Publishing site
- A Reproducibility
C Verified Articles
- Integrating Generative AI in Hackathons: Opportunities, Challenges, and Educational Implications / 2401.17434 / ISBN:https://doi.org/10.48550/arXiv.2401.17434 / Published by ArXiv / on (web) Publishing site
- 3. Results
- Improving governance outcomes through AI documentation: Bridging theory and practice / 2409.08960 / ISBN:https://doi.org/10.48550/arXiv.2409.08960 / Published by ArXiv / on (web) Publishing site
- 4 Results
- Reporting Non-Consensual Intimate Media: An Audit Study of Deepfakes / 2409.12138 / ISBN:https://doi.org/10.48550/arXiv.2409.12138 / Published by ArXiv / on (web) Publishing site
- 3 Method
- Ethical and Scalable Automation: A Governance and Compliance Framework for Business Applications / 2409.16872 / ISBN:https://doi.org/10.48550/arXiv.2409.16872 / Published by ArXiv / on (web) Publishing site
- 3. Methodology
- Social Media Bot Policies: Evaluating Passive and Active Enforcement / 2409.18931 / ISBN:https://doi.org/10.48550/arXiv.2409.18931 / Published by ArXiv / on (web) Publishing site
- III. Current Platform Measures
- Responsible AI in Open Ecosystems: Reconciling Innovation with Risk Assessment and Disclosure / 2409.19104 / ISBN:https://doi.org/10.48550/arXiv.2409.19104 / Published by ArXiv / on (web) Publishing site
- 3 Methods
- Application of AI in Credit Risk Scoring for Small Business Loans: A case study on how AI-based random forest model improves a Delphi model outcome in the case of Azerbaijani SMEs / 2410.05330 / ISBN:https://doi.org/10.48550/arXiv.2410.05330 / Published by ArXiv / on (web) Publishing site
- Methodology
- AI-Press: A Multi-Agent News Generating and Feedback Simulation System Powered by Large Language Models / 2410.07561 / ISBN:https://doi.org/10.48550/arXiv.2410.07561 / Published by ArXiv / on (web) Publishing site
- D Prompts for Agents on Press Polishing Module
- Investigating Labeler Bias in Face Annotation for Machine Learning / 2301.09902 / ISBN:https://doi.org/10.48550/arXiv.2301.09902 / Published by ArXiv / on (web) Publishing site
- 3. Method
5. Discussion
- The Design Space of in-IDE Human-AI Experience / 2410.08676 / ISBN:https://doi.org/10.48550/arXiv.2410.08676 / Published by ArXiv / on (web) Publishing site
- III. Method
- Learning Human-like Representations to Enable Learning Human Values / 2312.14106 / ISBN:https://doi.org/10.48550/arXiv.2312.14106 / Published by ArXiv / on (web) Publishing site
- A. Appendix
- Study on the Helpfulness of Explainable Artificial Intelligence / 2410.11896 / ISBN:https://doi.org/10.48550/arXiv.2410.11896 / Published by ArXiv / on (web) Publishing site
- 3 An objective Methodology for evaluating XAI
4 Survey Results
5 Discussion
- Navigating the Cultural Kaleidoscope: A Hitchhiker's Guide to Sensitivity in Large Language Models / 2410.12880 / ISBN:https://doi.org/10.48550/arXiv.2410.12880 / Published by ArXiv / on (web) Publishing site
- 4 Cultural safety dataset
5 Experimental setup
7 Cultural safeguarding
- Ethical AI in Retail: Consumer Privacy and Fairness / 2410.15369 / ISBN:https://doi.org/10.48550/arXiv.2410.15369 / Published by ArXiv / on (web) Publishing site
- 3.0 Methodology
- Towards Automated Penetration Testing: Introducing LLM Benchmark, Analysis, and Improvements / 2410.17141 / ISBN:https://doi.org/10.48550/arXiv.2410.17141 / Published by ArXiv / on (web) Publishing site
- 2 Background
- Demystifying Large Language Models for Medicine: A Primer / 2410.18856 / ISBN:https://doi.org/10.48550/arXiv.2410.18856 / Published by ArXiv / on (web) Publishing site
- Large Language Model Selection
- TRIAGE: Ethical Benchmarking of AI Models Through Mass Casualty Simulations / 2410.18991 / ISBN:https://doi.org/10.48550/arXiv.2410.18991 / Published by ArXiv / on (web) Publishing site
- Appendices
- Democratizing Reward Design for Personal and Representative Value-Alignment / 2410.22203 / ISBN:https://doi.org/10.48550/arXiv.2410.22203 / Published by ArXiv / on (web) Publishing site
- 4 Study Design & Methodology
Appendices
- Ethical Statistical Practice and Ethical AI / 2410.22475 / ISBN:https://doi.org/10.48550/arXiv.2410.22475 / Published by ArXiv / on (web) Publishing site
- 1. Introduction
- Examining Human-AI Collaboration for Co-Writing Constructive Comments Online / 2411.03295 / ISBN:https://doi.org/10.48550/arXiv.2411.03295 / Published by ArXiv / on (web) Publishing site
- 3 Methods
4 Findings
5 Discussion
- Smoke Screens and Scapegoats: The Reality of General Data Protection Regulation Compliance -- Privacy and Ethics in the Case of Replika AI / 2411.04490 / ISBN:https://doi.org/10.48550/arXiv.2411.04490 / Published by ArXiv / on (web) Publishing site
- 1. Introduction
3. Method
- I Always Felt that Something Was Wrong.: Understanding Compliance Risks and Mitigation Strategies when Professionals Use Large Language Models / 2411.04576 / ISBN:https://doi.org/10.48550/arXiv.2411.04576 / Published by ArXiv / on (web) Publishing site
- 3 Method: Semi-structured Interviews
- Navigating the Cultural Kaleidoscope: A Hitchhiker's Guide to Sensitivity in Large Language Models / 2410.12880 / ISBN:https://doi.org/10.48550/arXiv.2410.12880 / Published by ArXiv / on (web) Publishing site
- Appendices