RobertoLofaro.com - Knowledge Portal - human-generated content
Change, with and without technology - human, AI, scraping readers welcome



AI Ethics Primer - search within the bibliography - version 0.4 of 2023-12-13 - selected tag: rauh



Tag: rauh

Bibliography items where the tag occurs: 15
The AI Index 2022 Annual Report / arXiv:2205.03468 / DOI: https://doi.org/10.48550/arXiv.2205.03468 / Published by ArXiv / Version released on 2022-05-02

Mapping the Ethics of Generative AI: A Comprehensive Scoping Review / arXiv:2402.08323 / DOI: https://doi.org/10.48550/arXiv.2402.08323 / Published by ArXiv / Version released on 2024-02-13

The Responsible Foundation Model Development Cheatsheet: A Review of Tools & Resources / arXiv:2406.16746 / DOI: https://doi.org/10.48550/arXiv.2406.16746 / Published by ArXiv / Version released on 2024-09-03

Improving governance outcomes through AI documentation: Bridging theory and practice / arXiv:2409.08960 / DOI: https://doi.org/10.48550/arXiv.2409.08960 / Published by ArXiv / Version released on 2024-12-09

Technology as uncharted territory: Contextual integrity and the notion of AI as new ethical ground / arXiv:2412.05130 / DOI: https://doi.org/10.48550/arXiv.2412.05130 / Published by ArXiv / Version released on 2025-08-27

Concerns and Values in Human-Robot Interactions: A Focus on Social Robotics / arXiv:2501.05628 / DOI: https://doi.org/10.48550/arXiv.2501.05628 / Published by ArXiv / Version released on 2025-12-07

On the Trustworthiness of Generative Foundation Models: Guideline, Assessment, and Perspective / arXiv:2502.14296 / DOI: https://doi.org/10.48550/arXiv.2502.14296 / Published by ArXiv / Version released on 2025-09-30

Who is Responsible When AI Fails? Mapping Causes, Entities, and Consequences of AI Privacy and Ethical Incidents / arXiv:2504.01029 / DOI: https://doi.org/10.48550/arXiv.2504.01029 / Published by ArXiv / Version released on 2025-09-18

Locating Risk: Task Designers and the Challenge of Risk Disclosure in RAI Content Work / arXiv:2505.24246 / DOI: https://doi.org/10.48550/arXiv.2505.24246 / Published by ArXiv / Version released on 2025-09-30

When Large Language Models Meet Law: Dual-Lens Taxonomy, Technical Advances, and Ethical Governance / arXiv:2507.07748 / DOI: https://doi.org/10.48550/arXiv.2507.07748 / Published by ArXiv / Version released on 2025-07-10

Psychometric Personality Shaping Modulates Capabilities and Safety in Language Models / arXiv:2509.16332 / DOI: https://doi.org/10.48550/arXiv.2509.16332 / Published by ArXiv / Version released on 2025-09-19

Human-aligned AI Model Cards with Weighted Hierarchy Architecture / arXiv:2510.06989 / DOI: https://doi.org/10.48550/arXiv.2510.06989 / Published by ArXiv / Version released on 2025-10-08

Cultural Dimensions of Artificial Intelligence Adoption: Empirical Insights for Wave 1 from a Multinational Longitudinal Pilot Study / arXiv:2510.19743 / DOI: https://doi.org/10.48550/arXiv.2510.19743 / Published by ArXiv / Version released on 2025-10-22

Diverse Human Value Alignment for Large Language Models via Ethical Reasoning / arXiv:2511.00379 / DOI: https://doi.org/10.48550/arXiv.2511.00379 / Published by ArXiv / Version released on 2025-11-01

Morality in AI. A plea to embed morality in LLM architectures and frameworks / arXiv:2511.20689 / DOI: https://doi.org/10.48550/arXiv.2511.20689 / Published by ArXiv / Version released on 2025-11-21