RobertoLofaro.com - Knowledge Portal - human-generated content
Change, with and without technology - human, AI, scraping readers welcome



AI Ethics Primer - search within the bibliography - version 0.4 of 2023-12-13 - tag selected: casper



Tag: casper

Bibliography items where the tag occurs: 28
AI Alignment and Social Choice: Fundamental Limitations and Policy Implications / 2310.16048 / DOI: https://doi.org/10.48550/arXiv.2310.16048 / Published by ArXiv / Version released on 2023-10-24

LLMs grasp morality in concept / 2311.02294 / DOI: https://doi.org/10.48550/arXiv.2311.02294 / Published by ArXiv / Version released on 2023-11-04

She had Cobalt Blue Eyes: Prompt Testing to Create Aligned and Sustainable Language Models / 2310.18333 / DOI: https://doi.org/10.48550/arXiv.2310.18333 / Published by ArXiv / Version released on 2023-12-15

AI Alignment: A Comprehensive Survey / 2310.19852 / DOI: https://doi.org/10.48550/arXiv.2310.19852 / Published by ArXiv / Version released on 2025-04-04

The Necessity of AI Audit Standards Boards / 2404.13060 / DOI: https://doi.org/10.48550/arXiv.2404.13060 / Published by ArXiv / Version released on 2024-04-11

Social Choice Should Guide AI Alignment in Dealing with Diverse Human Feedback / 2404.10271 / DOI: https://doi.org/10.48550/arXiv.2404.10271 / Published by ArXiv / Version released on 2024-06-04

AI Alignment through Reinforcement Learning from Human Feedback? Contradictions and Limitations / 2406.18346 / DOI: https://doi.org/10.48550/arXiv.2406.18346 / Published by ArXiv / Version released on 2024-06-26

Artificial Human Intelligence: The role of Humans in the Development of Next Generation AI / 2409.16001 / DOI: https://doi.org/10.48550/arXiv.2409.16001 / Published by ArXiv / Version released on 2025-12-05

How Do AI Companies Fine-Tune Policy? Examining Regulatory Capture in AI Governance / 2410.13042 / DOI: https://doi.org/10.48550/arXiv.2410.13042 / Published by ArXiv / Version released on 2024-10-16

Jailbreaking and Mitigation of Vulnerabilities in Large Language Models / 2410.15236 / DOI: https://doi.org/10.48550/arXiv.2410.15236 / Published by ArXiv / Version released on 2025-11-25

Large Language Model Safety: A Holistic Survey / 2412.17686 / DOI: https://doi.org/10.48550/arXiv.2412.17686 / Published by ArXiv / Version released on 2024-12-23

Hybrid Approaches for Moral Value Alignment in AI Agents: a Manifesto / 2312.01818 / DOI: https://doi.org/10.48550/arXiv.2312.01818 / Published by ArXiv / Version released on 2025-01-16

Prioritization First, Principles Second: An Adaptive Interpretation of Helpful, Honest, and Harmless Principles / 2502.06059 / DOI: https://doi.org/10.48550/arXiv.2502.06059 / Published by ArXiv / Version released on 2025-10-14

On the Trustworthiness of Generative Foundation Models: Guideline, Assessment, and Perspective / 2502.14296 / DOI: https://doi.org/10.48550/arXiv.2502.14296 / Published by ArXiv / Version released on 2025-09-30

Surgical Scene Understanding in the Era of Foundation AI Models: A Comprehensive Review / 2502.14886 / DOI: https://doi.org/10.48550/arXiv.2502.14886 / Published by ArXiv / Version released on 2025-11-03

DarkBench: Benchmarking Dark Patterns in Large Language Models / 2503.10728 / DOI: https://doi.org/10.48550/arXiv.2503.10728 / Published by ArXiv / Version released on 2025-03-13

Who is Responsible When AI Fails? Mapping Causes, Entities, and Consequences of AI Privacy and Ethical Incidents / 2504.01029 / DOI: https://doi.org/10.48550/arXiv.2504.01029 / Published by ArXiv / Version released on 2025-09-18

Approaches to Responsible Governance of GenAI in Organizations / 2504.17044 / DOI: https://doi.org/10.48550/arXiv.2504.17044 / Published by ArXiv / Version released on 2025-09-14

Kaleidoscope Gallery: Exploring Ethics and Generative AI Through Art / 2505.14758 / DOI: https://doi.org/10.48550/arXiv.2505.14758 / Published by ArXiv / Version released on 2025-05-20

Mechanistic Interpretability Needs Philosophy / 2506.18852 / DOI: https://doi.org/10.48550/arXiv.2506.18852 / Published by ArXiv / Version released on 2025-06-23

AI Model Passport: Data and System Traceability Framework for Transparent AI in Health / 2506.22358 / DOI: https://doi.org/10.48550/arXiv.2506.22358 / Published by ArXiv / Version released on 2025-06-27

Never Compromise to Vulnerabilities: A Comprehensive Survey on AI Governance / 2508.08789 / DOI: https://doi.org/10.48550/arXiv.2508.08789 / Published by ArXiv / Version released on 2025-08-18

The AI-Fraud Diamond: A Novel Lens for Auditing Algorithmic Deception / 2508.13984 / DOI: https://doi.org/10.48550/arXiv.2508.13984 / Published by ArXiv / Version released on 2025-08-19

Beyond Prediction: Reinforcement Learning as the Defining Leap in Healthcare AI / 2508.21101 / DOI: https://doi.org/10.48550/arXiv.2508.21101 / Published by ArXiv / Version released on 2025-08-28

The Scales of Justitia: A Comprehensive Survey on Safety Evaluation of LLMs / 2506.11094 / DOI: https://doi.org/10.48550/arXiv.2506.11094 / Published by ArXiv / Version released on 2025-10-30

Understanding AI Trustworthiness: A Scoping Review of AIES & FAccT Articles / 2510.21293 / DOI: https://doi.org/10.48550/arXiv.2510.21293 / Published by ArXiv / Version released on 2025-10-28

Designing and Evaluating Malinowski's Lens: An AI-Native Educational Game for Ethnographic Learning / 2511.07682 / DOI: https://doi.org/10.48550/arXiv.2511.07682 / Published by ArXiv / Version released on 2025-11-10

The Decision Path to Control AI Risks Completely: Fundamental Control Mechanisms for AI Governance / 2512.04489 / DOI: https://doi.org/10.48550/arXiv.2512.04489 / Published by ArXiv / Version released on 2025-12-04