If you need more than one keyword, edit the field and separate the keywords with an underscore (_).
The list of search keywords can be up to 50 characters long.
If you modify the keywords, press Enter within the field to confirm the new search key.
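As a minimal illustration (a sketch in Python with a hypothetical helper name, not part of this site), the rules above amount to joining keywords with underscores and keeping the result within 50 characters:

    MAX_KEY_LENGTH = 50  # the search-key field accepts up to 50 characters

    def build_search_key(keywords):
        # Join non-empty keywords with underscores, as the field expects.
        key = "_".join(k.strip() for k in keywords if k.strip())
        if len(key) > MAX_KEY_LENGTH:
            raise ValueError(f"search key too long ({len(key)} > {MAX_KEY_LENGTH}): {key!r}")
        return key

    # Example: build_search_key(["reliably", "safety"]) returns "reliably_safety"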
Tag: reliably
Bibliography items where this tag occurs: 61
- What does it mean to be a responsible AI practitioner: An ontology of roles and skills / 2205.03946 / ISBN:https://doi.org/10.48550/arXiv.2205.03946 / Published by ArXiv / on (web) Publishing site
- 4 Proposed competency framework for responsible AI practitioners
- GPT detectors are biased against non-native English writers / 2304.02819 / ISBN:https://doi.org/10.48550/arXiv.2304.02819 / Published by ArXiv / on (web) Publishing site
- References
- Implementing Responsible AI: Tensions and Trade-Offs Between Ethics Aspects / 2304.08275 / ISBN:https://doi.org/10.48550/arXiv.2304.08275 / Published by ArXiv / on (web) Publishing site
- I. Introduction
- II. Underlying Aspects
- Normative Ethics Principles for Responsible AI Systems: Taxonomy and Future Directions / 2208.12616 / ISBN:https://doi.org/10.48550/arXiv.2208.12616 / Published by ArXiv / on (web) Publishing site
- 4 Previous operationalisation of ethical principles
- A Survey of Safety and Trustworthiness of Large Language Models through the Lens of Verification and Validation / 2305.11391 / ISBN:https://doi.org/10.48550/arXiv.2305.11391 / Published by ArXiv / on (web) Publishing site
- 7 Runtime Monitor
- Ethical Framework for Harnessing the Power of AI in Healthcare and Beyond / 2309.00064 / ISBN:https://doi.org/10.48550/arXiv.2309.00064 / Published by ArXiv / on (web) Publishing site
- 3 Bias and fairness
- The AI Incident Database as an Educational Tool to Raise Awareness of AI Harms: A Classroom Exploration of Efficacy, Limitations, & Future Improvements / 2310.06269 / ISBN:https://doi.org/10.48550/arXiv.2310.06269 / Published by ArXiv / on (web) Publishing site
- B Pre-class Questionnaire (Verbatim)
- A Review of the Ethics of Artificial Intelligence and its Applications in the United States / 2310.05751 / ISBN:https://doi.org/10.48550/arXiv.2310.05751 / Published by ArXiv / on (web) Publishing site
- 3. AI Ethical Principles
- A Survey of Large Language Models for Healthcare: from Data, Technology, and Applications to Accountability and Ethics / 2310.05694 / ISBN:https://doi.org/10.48550/arXiv.2310.05694 / Published by ArXiv / on (web) Publishing site
- VI. IMPROVING FAIRNESS, ACCOUNTABILITY, TRANSPARENCY, AND ETHICS
- Commercialized Generative AI: A Critical Study of the Feasibility and Ethics of Generating Native Advertising Using Large Language Models in Conversational Web Search / 2310.04892 / ISBN:https://doi.org/10.48550/arXiv.2310.04892 / Published by ArXiv / on (web) Publishing site
- Introduction
- Specific versus General Principles for Constitutional AI / 2310.13798 / ISBN:https://doi.org/10.48550/arXiv.2310.13798 / Published by ArXiv / on (web) Publishing site
- 4 Reinforcement Learning with Good-for-Humanity Preference Models
- 6 Discussion
- Unpacking the Ethical Value Alignment in Big Models / 2310.17551 / ISBN:https://doi.org/10.48550/arXiv.2310.17551 / Published by ArXiv / on (web) Publishing site
- 2 Risks and Ethical Issues of Big Model
- Safety, Trust, and Ethics Considerations for Human-AI Teaming in Aerospace Control / 2311.08943 / ISBN:https://doi.org/10.48550/arXiv.2311.08943 / Published by ArXiv / on (web) Publishing site
- III. Safety
- Prudent Silence or Foolish Babble? Examining Large Language Models' Responses to the Unknown / 2311.09731 / ISBN:https://doi.org/10.48550/arXiv.2311.09731 / Published by ArXiv / on (web) Publishing site
- Abstract
- Responsible AI Considerations in Text Summarization Research: A Review of Current Practices / 2311.11103 / ISBN:https://doi.org/10.48550/arXiv.2311.11103 / Published by ArXiv / on (web) Publishing site
- 4 Findings
- Assessing AI Impact Assessments: A Classroom Study / 2311.11193 / ISBN:https://doi.org/10.48550/arXiv.2311.11193 / Published by ArXiv / on (web) Publishing site
- B Study Materials
- Large Language Models in Education: Vision and Opportunities / 2311.13160 / ISBN:https://doi.org/10.48550/arXiv.2311.13160 / Published by ArXiv / on (web) Publishing site
- VI. Challenges and future directions
- From Lab to Field: Real-World Evaluation of an AI-Driven Smart Video Solution to Enhance Community Safety / 2312.02078 / ISBN:https://doi.org/10.48550/arXiv.2312.02078 / Published by ArXiv / on (web) Publishing site
- Deployment and Setup
- Control Risk for Potential Misuse of Artificial Intelligence in Science / 2312.06632 / ISBN:https://doi.org/10.48550/arXiv.2312.06632 / Published by ArXiv / on (web) Publishing site
- 2 Risks of Misuse for Artificial Intelligence in Science
- The AI Assessment Scale (AIAS): A Framework for Ethical Integration of Generative AI in Educational Assessment / 2312.07086 / ISBN:https://doi.org/10.48550/arXiv.2312.07086 / Published by ArXiv / on (web) Publishing site
- References
- Designing Guiding Principles for NLP for Healthcare: A Case Study of Maternal Health / 2312.11803 / ISBN:https://doi.org/10.48550/arXiv.2312.11803 / Published by ArXiv / on (web) Publishing site
- C Full survey questions
- Learning Human-like Representations to Enable Learning Human Values / 2312.14106 / ISBN:https://doi.org/10.48550/arXiv.2312.14106 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
- AI Ethics Principles in Practice: Perspectives of Designers and Developers / 2112.07467 / ISBN:https://doi.org/10.48550/arXiv.2112.07467 / Published by ArXiv / on (web) Publishing site
- III. Methods
- Commercial AI, Conflict, and Moral Responsibility: A theoretical analysis and practical approach to the moral responsibilities associated with dual-use AI technology / 2402.01762 / ISBN:https://doi.org/10.48550/arXiv.2402.01762 / Published by ArXiv / on (web) Publishing site
- References
- (A)I Am Not a Lawyer, But...: Engaging Legal Experts towards Responsible LLM Policies for Legal Advice / 2402.01864 / ISBN:https://doi.org/10.48550/arXiv.2402.01864 / Published by ArXiv / on (web) Publishing site
- 4 Results
- POLARIS: A framework to guide the development of Trustworthy AI systems / 2402.05340 / ISBN:https://doi.org/10.48550/arXiv.2402.05340 / Published by ArXiv / on (web) Publishing site
- 2 Background
- Multi-stakeholder Perspective on Responsible Artificial Intelligence and Acceptability in Education / 2402.15027 / ISBN:https://doi.org/10.48550/arXiv.2402.15027 / Published by ArXiv / on (web) Publishing site
- 2 Background
- Guidelines for Integrating Value Sensitive Design in Responsible AI Toolkits / 2403.00145 / ISBN:https://doi.org/10.48550/arXiv.2403.00145 / Published by ArXiv / on (web) Publishing site
- 4 Results
- Generative AI in Higher Education: Seeing ChatGPT Through Universities' Policies, Resources, and Guidelines / 2312.05235 / ISBN:https://doi.org/10.48550/arXiv.2312.05235 / Published by ArXiv / on (web) Publishing site
- References
- Trust in AI: Progress, Challenges, and Future Directions / 2403.14680 / ISBN:https://doi.org/10.48550/arXiv.2403.14680 / Published by ArXiv / on (web) Publishing site
- 4. Discussion
- The Pursuit of Fairness in Artificial Intelligence Models A Survey / 2403.17333 / ISBN:https://doi.org/10.48550/arXiv.2403.17333 / Published by ArXiv / on (web) Publishing site
- 5 Ways to mitigate bias and promote Fairness
- Frontier AI Ethics: Anticipating and Evaluating the Societal Impacts of Language Model Agents / 2404.06750 / ISBN:https://arxiv.org/abs/2404.06750 / Published by ArXiv / on (web) Publishing site
- A Primer
- Rebooting Machine Ethics
- AI Alignment: A Comprehensive Survey / 2310.19852 / ISBN:https://doi.org/10.48550/arXiv.2310.19852 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
- Designing Safe and Engaging AI Experiences for Children: Towards the Definition of Best Practices in UI/UX Design / 2404.14218 / ISBN:https://doi.org/10.48550/arXiv.2404.14218 / Published by ArXiv / on (web) Publishing site
- 4 Metrics for Assessing Trustworthiness, Reliability, and Safety in Human-AI Interaction
- War Elephants: Rethinking Combat AI and Human Oversight / 2404.19573 / ISBN:https://doi.org/10.48550/arXiv.2404.19573 / Published by ArXiv / on (web) Publishing site
- 2 Background
- 4 Discussion
- A Comprehensive Overview of Large Language Models (LLMs) for Cyber Defences: Opportunities and Directions / 2405.14487 / ISBN:https://doi.org/10.48550/arXiv.2405.14487 / Published by ArXiv / on (web) Publishing site
- IV. Network Security
- Using Large Language Models for Humanitarian Frontline Negotiation: Opportunities and Considerations / 2405.20195 / ISBN:https://doi.org/10.48550/arXiv.2405.20195 / Published by ArXiv / on (web) Publishing site
- 6. Discussion
- MoralBench: Moral Evaluation of LLMs / 2406.04428 / Published by ArXiv / on (web) Publishing site
- 5 Conclusion
- Can Prompt Modifiers Control Bias? A Comparative Analysis of Text-to-Image Generative Models / 2406.05602 / Published by ArXiv / on (web) Publishing site
- 7. Conclusion
- Fair by design: A sociotechnical approach to justifying the fairness of AI-enabled systems across the lifecycle / 2406.09029 / ISBN:https://doi.org/10.48550/arXiv.2406.09029 / Published by ArXiv / on (web) Publishing site
- 4 Assuring AI fairness in healthcare
- Current state of LLM Risks and AI Guardrails / 2406.12934 / ISBN:https://doi.org/10.48550/arXiv.2406.12934 / Published by ArXiv / on (web) Publishing site
- 2 Large Language Model Risks
- Honest Computing: Achieving demonstrable data lineage and provenance for driving data and process-sensitive policies / 2407.14390 / ISBN:https://doi.org/10.48550/arXiv.2407.14390 / Published by ArXiv / on (web) Publishing site
- Abstract
- 3. Honest Computing reference specifications
- AI-Driven Chatbot for Intrusion Detection in Edge Networks: Enhancing Cybersecurity with Ethical User Consent / 2408.04281 / ISBN:https://doi.org/10.48550/arXiv.2408.04281 / Published by ArXiv / on (web) Publishing site
- V. Results
- The Responsible Foundation Model Development Cheatsheet: A Review of Tools & Resources / 2406.16746 / ISBN:https://doi.org/10.48550/arXiv.2406.16746 / Published by ArXiv / on (web) Publishing site
- References
- Neuro-Symbolic AI for Military Applications / 2408.09224 / ISBN:https://doi.org/10.48550/arXiv.2408.09224 / Published by ArXiv / on (web) Publishing site
- V. Challenges and Risks
- Promises and challenges of generative artificial intelligence for human learning / 2408.12143 / ISBN:https://doi.org/10.48550/arXiv.2408.12143 / Published by ArXiv / on (web) Publishing site
- 5 Conclusion and Future Directions
- Has Multimodal Learning Delivered Universal Intelligence in Healthcare? A Comprehensive Survey / 2408.12880 / ISBN:https://doi.org/10.48550/arXiv.2408.12880 / Published by ArXiv / on (web) Publishing site
- 3 Multimodal Medical Studies
- What Is Required for Empathic AI? It Depends, and Why That Matters for AI Developers and Users / 2408.15354 / ISBN:https://doi.org/10.48550/arXiv.2408.15354 / Published by ArXiv / on (web) Publishing site
- What Empathic Capabilities Do AIs Need?
- Trustworthy and Responsible AI for Human-Centric Autonomous Decision-Making Systems / 2408.15550 / ISBN:https://doi.org/10.48550/arXiv.2408.15550 / Published by ArXiv / on (web) Publishing site
- 2 Trustworthy and Responsible AI Definition
- Data-Centric Foundation Models in Computational Healthcare: A Survey / 2401.02458 / ISBN:https://doi.org/10.48550/arXiv.2401.02458 / Published by ArXiv / on (web) Publishing site
- 10 Conclusions
- ValueCompass: A Framework of Fundamental Values for Human-AI Alignment / 2409.09586 / ISBN:https://doi.org/10.48550/arXiv.2409.09586 / Published by ArXiv / on (web) Publishing site
- 3 Designing ValueCompass: A Comprehensive Framework for Defining Fundamental Values in Alignment
- 4 Operationalizing ValueCompass: Methods to Measure Value Alignment of Humans and AI
- Safety challenges of AI in medicine / 2409.18968 / ISBN:https://doi.org/10.48550/arXiv.2409.18968 / Published by ArXiv / on (web) Publishing site
- 2 Inherent problems of AI related to medicine
- Clinnova Federated Learning Proof of Concept: Key Takeaways from a Cross-border Collaboration / 2410.02443 / ISBN:https://doi.org/10.48550/arXiv.2410.02443 / Published by ArXiv / on (web) Publishing site
- V. Proof of Concepts 2
- When AI Eats Itself: On the Caveats of Data Pollution in the Era of Generative AI / 2405.09597 / ISBN:https://doi.org/10.48550/arXiv.2405.09597 / Published by ArXiv / on (web) Publishing site
- References
- Is ETHICS about ethics? Evaluating the ETHICS benchmark / 2410.13009 / ISBN:https://doi.org/10.48550/arXiv.2410.13009 / Published by ArXiv / on (web) Publishing site
- 3 Misunderstanding the nature of general moral theories
- Ethical AI in Retail: Consumer Privacy and Fairness / 2410.15369 / ISBN:https://doi.org/10.48550/arXiv.2410.15369 / Published by ArXiv / on (web) Publishing site
- 2.0 Literature Review
- Trustworthy XAI and Application / 2410.17139 / ISBN:https://doi.org/10.48550/arXiv.2410.17139 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
- TRIAGE: Ethical Benchmarking of AI Models Through Mass Casualty Simulations / 2410.18991 / ISBN:https://doi.org/10.48550/arXiv.2410.18991 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
- The Trap of Presumed Equivalence: Artificial General Intelligence Should Not Be Assessed on the Scale of Human Intelligence / 2410.21296 / ISBN:https://doi.org/10.48550/arXiv.2410.21296 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
- 3 Assessing the Current State of Self-Awareness in Artificial Intelligent Systems
- References
- Web Scraping for Research: Legal, Ethical, Institutional, and Scientific Considerations / 2410.23432 / ISBN:https://doi.org/10.48550/arXiv.2410.23432 / Published by ArXiv / on (web) Publishing site
- Appendices
- Where Assessment Validation and Responsible AI Meet / 2411.02577 / ISBN:https://doi.org/10.48550/arXiv.2411.02577 / Published by ArXiv / on (web) Publishing site
- Integrating Classical Validation Theory and Responsible AI