RobertoLofaro.com - Knowledge Portal - human-generated content
Change, with and without technology - human, AI, scraping readers welcome
for updates on publications, follow: on Instagram, Twitter, Patreon, YouTube, Kaggle metadata



You are now here: AI Ethics Primer - search within the bibliography - version 0.4 of 2023-12-13 > (tag cloud) > tag selected: helpfulness



Tag: helpfulness

Bibliography items where this tag occurs: 72
A multilevel framework for AI governance / 2307.03198 / ISBN:https://doi.org/10.48550/arXiv.2307.03198 / Published by ArXiv / Version released on 2023-07-13 / on (web) Publishing site


A Survey of Safety and Trustworthiness of Large Language Models through the Lens of Verification and Validation / 2305.11391 / ISBN:https://doi.org/10.48550/arXiv.2305.11391 / Published by ArXiv / Version released on 2023-08-27 / on (web) Publishing site


Who to Trust, How and Why: Untangling AI Ethics Principles, Trustworthiness and Trust / 2309.10318 / ISBN:https://doi.org/10.48550/arXiv.2309.10318 / Published by ArXiv / Version released on 2023-09-19 / on (web) Publishing site


Specific versus General Principles for Constitutional AI / 2310.13798 / ISBN:https://doi.org/10.48550/arXiv.2310.13798 / Published by ArXiv / Version released on 2023-10-20 / on (web) Publishing site


The Self 2.0: How AI-Enhanced Self-Clones Transform Self-Perception and Improve Presentation Skills / 2310.15112 / ISBN:https://doi.org/10.48550/arXiv.2310.15112 / Published by ArXiv / Version released on 2023-10-23 / on (web) Publishing site


AI Alignment and Social Choice: Fundamental Limitations and Policy Implications / 2310.16048 / ISBN:https://doi.org/10.48550/arXiv.2310.16048 / Published by ArXiv / Version released on 2023-10-24 / on (web) Publishing site


Unpacking the Ethical Value Alignment in Big Models / 2310.17551 / ISBN:https://doi.org/10.48550/arXiv.2310.17551 / Published by ArXiv / Version released on 2023-10-26 / on (web) Publishing site


Human participants in AI research: Ethics and transparency in practice / 2311.01254 / ISBN:https://doi.org/10.48550/arXiv.2311.01254 / Published by ArXiv / Version released on 2024-09-26 / on (web) Publishing site


Unlocking the Potential of ChatGPT: A Comprehensive Exploration of its Applications, Advantages, Limitations, and Future Directions in Natural Language Processing / 2304.02017 / ISBN:https://doi.org/10.48550/arXiv.2304.02017 / Published by ArXiv / Version released on 2024-08-03 / on (web) Publishing site


Synergizing Human-AI Agency: A Guide of 23 Heuristics for Service Co-Creation with LLM-Based Agents / 2310.15065 / ISBN:https://doi.org/10.48550/arXiv.2310.15065 / Published by ArXiv / Version released on 2023-11-29 / on (web) Publishing site


She had Cobalt Blue Eyes: Prompt Testing to Create Aligned and Sustainable Language Models / 2310.18333 / ISBN:https://doi.org/10.48550/arXiv.2310.18333 / Published by ArXiv / Version released on 2023-12-15 / on (web) Publishing site


How Trustworthy are Open-Source LLMs? An Assessment under Malicious Demonstrations Shows their Vulnerabilities / 2311.09447 / ISBN:https://doi.org/10.48550/arXiv.2311.09447 / Published by ArXiv / Version released on 2024-04-02 / on (web) Publishing site


Prudent Silence or Foolish Babble? Examining Large Language Models' Responses to the Unknown / 2311.09731 / ISBN:https://doi.org/10.48550/arXiv.2311.09731 / Published by ArXiv / Version released on 2023-11-16 / on (web) Publishing site


Case Repositories: Towards Case-Based Reasoning for AI Alignment / 2311.10934 / ISBN:https://doi.org/10.48550/arXiv.2311.10934 / Published by ArXiv / Version released on 2023-11-26 / on (web) Publishing site


Control Risk for Potential Misuse of Artificial Intelligence in Science / 2312.06632 / ISBN:https://doi.org/10.48550/arXiv.2312.06632 / Published by ArXiv / Version released on 2023-12-11 / on (web) Publishing site


The AI Assessment Scale (AIAS): A Framework for Ethical Integration of Generative AI in Educational Assessment / 2312.07086 / ISBN:https://doi.org/10.48550/arXiv.2312.07086 / Published by ArXiv / Version released on 2024-04-24 / on (web) Publishing site


Improving Task Instructions for Data Annotators: How Clear Rules and Higher Pay Increase Performance in Data Annotation in the AI Economy / 2312.14565 / ISBN:https://doi.org/10.48550/arXiv.2312.14565 / Published by ArXiv / Version released on 2024-08-16 / on (web) Publishing site


(A)I Am Not a Lawyer, But...: Engaging Legal Experts towards Responsible LLM Policies for Legal Advice / 2402.01864 / ISBN:https://doi.org/10.48550/arXiv.2402.01864 / Published by ArXiv / Version released on 2024-02-02 / on (web) Publishing site


Inadequacies of Large Language Model Benchmarks in the Era of Generative Artificial Intelligence / 2402.09880 / ISBN:https://doi.org/10.48550/arXiv.2402.09880 / Published by ArXiv / Version released on 2024-10-14 / on (web) Publishing site


AI Alignment: A Comprehensive Survey / 2310.19852 / ISBN:https://doi.org/10.48550/arXiv.2310.19852 / Published by ArXiv / Version released on 2025-04-04 / on (web) Publishing site


Debunking Robot Rights Metaphysically, Ethically, and Legally / 2404.10072 / ISBN:https://doi.org/10.48550/arXiv.2404.10072 / Published by ArXiv / Version released on 2024-04-15 / on (web) Publishing site


Large Language Model Supply Chain: A Research Agenda / 2404.12736 / ISBN:https://doi.org/10.48550/arXiv.2404.12736 / Published by ArXiv / Version released on 2024-04-19 / on (web) Publishing site


Not a Swiss Army Knife: Academics' Perceptions of Trade-Offs Around Generative Artificial Intelligence Use / 2405.00995 / ISBN:https://doi.org/10.48550/arXiv.2405.00995 / Published by ArXiv / Version released on 2025-08-25 / on (web) Publishing site


How Ethical Should AI Be? How AI Alignment Shapes the Risk Preferences of LLMs / 2406.01168 / ISBN:https://doi.org/10.48550/arXiv.2406.01168 / Published by ArXiv / Version released on 2024-08-01 / on (web) Publishing site


Documenting Ethical Considerations in Open Source AI Models / 2406.18071 / ISBN:https://doi.org/10.48550/arXiv.2406.18071 / Published by ArXiv / Version released on 2024-07-03 / on (web) Publishing site


AI Alignment through Reinforcement Learning from Human Feedback? Contradictions and Limitations / 2406.18346 / ISBN:https://doi.org/10.48550/arXiv.2406.18346 / Published by ArXiv / Version released on 2024-06-26 / on (web) Publishing site


Thorns and Algorithms: Navigating Generative AI Challenges Inspired by Giraffes and Acacias / 2407.11360 / ISBN:https://doi.org/10.48550/arXiv.2407.11360 / Published by ArXiv / Version released on 2024-07-16 / on (web) Publishing site


Interactive embodied evolution for socially adept Artificial General Creatures / 2407.21357 / ISBN:https://doi.org/10.48550/arXiv.2407.21357 / Published by ArXiv / Version released on 2024-07-31 / on (web) Publishing site


Exploring the Role of Social Support when Integrating Generative AI into Small Business Workflows / 2407.21404 / ISBN:https://doi.org/10.48550/arXiv.2407.21404 / Published by ArXiv / Version released on 2024-07-31 / on (web) Publishing site


Improving Large Language Model (LLM) fidelity through context-aware grounding: A systematic approach to reliability and veracity / 2408.04023 / ISBN:https://doi.org/10.48550/arXiv.2408.04023 / Published by ArXiv / Version released on 2024-08-07 / on (web) Publishing site


The Responsible Foundation Model Development Cheatsheet: A Review of Tools & Resources / 2406.16746 / ISBN:https://doi.org/10.48550/arXiv.2406.16746 / Published by ArXiv / Version released on 2024-09-03 / on (web) Publishing site


CIPHER: Cybersecurity Intelligent Penetration-testing Helper for Ethical Researcher / 2408.11650 / ISBN:https://doi.org/10.48550/arXiv.2408.11650 / Published by ArXiv / Version released on 2024-11-06 / on (web) Publishing site


Large language models as linguistic simulators and cognitive models in human research / 2402.04470 / ISBN:https://doi.org/10.48550/arXiv.2402.04470 / Published by ArXiv / Version released on 2024-10-20 / on (web) Publishing site


ValueCompass: A Framework for Measuring Contextual Value Alignment Between Human and LLMs / 2409.09586 / ISBN:https://doi.org/10.48550/arXiv.2409.09586 / Published by ArXiv / Version released on 2025-11-04 / on (web) Publishing site


GenAI Advertising: Risks of Personalizing Ads with LLMs / 2409.15436 / ISBN:https://doi.org/10.48550/arXiv.2409.15436 / Published by ArXiv / Version released on 2024-09-23 / on (web) Publishing site


Ethical software requirements from user reviews: A systematic literature review / 2410.01833 / ISBN:https://doi.org/10.48550/arXiv.2410.01833 / Published by ArXiv / Version released on 2024-09-18 / on (web) Publishing site


DailyDilemmas: Revealing Value Preferences of LLMs with Quandaries of Daily Life / 2410.02683 / ISBN:https://doi.org/10.48550/arXiv.2410.02683 / Published by ArXiv / Version released on 2025-03-15 / on (web) Publishing site


From human-centered to social-centered artificial intelligence: Assessing ChatGPT's impact through disruptive events / 2306.00227 / ISBN:https://doi.org/10.48550/arXiv.2306.00227 / Published by ArXiv / Version released on 2024-10-25 / on (web) Publishing site


Study on the Helpfulness of Explainable Artificial Intelligence / 2410.11896 / ISBN:https://doi.org/10.48550/arXiv.2410.11896 / Published by ArXiv / Version released on 2024-10-14 / on (web) Publishing site


Navigating the Cultural Kaleidoscope: A Hitchhiker's Guide to Sensitivity in Large Language Models / 2410.12880 / ISBN:https://doi.org/10.48550/arXiv.2410.12880 / Published by ArXiv / Version released on 2025-01-24 / on (web) Publishing site


Jailbreaking and Mitigation of Vulnerabilities in Large Language Models / 2410.15236 / ISBN:https://doi.org/10.48550/arXiv.2410.15236 / Published by ArXiv / Version released on 2025-11-25 / on (web) Publishing site


The Dark Side of AI Companionship: A Taxonomy of Harmful Algorithmic Behaviors in Human-AI Relationships / 2410.20130 / ISBN:https://doi.org/10.48550/arXiv.2410.20130 / Published by ArXiv / Version released on 2025-01-26 / on (web) Publishing site


Nteasee: A mixed methods study of expert and general population perspectives on deploying AI for health in African countries / 2409.12197 / ISBN:https://doi.org/10.48550/arXiv.2409.12197 / Published by ArXiv / Version released on 2024-11-11 / on (web) Publishing site


Towards Friendly AI: A Comprehensive Review and New Perspectives on Human-AI Alignment / 2412.15114 / ISBN:https://doi.org/10.48550/arXiv.2412.15114 / Published by ArXiv / Version released on 2024-12-19 / on (web) Publishing site


Hybrid Approaches for Moral Value Alignment in AI Agents: a Manifesto / 2312.01818 / ISBN:https://doi.org/10.48550/arXiv.2312.01818 / Published by ArXiv / Version released on 2025-01-16 / on (web) Publishing site


Towards Safe AI Clinicians: A Comprehensive Study on Large Language Model Jailbreaking in Healthcare / 2501.18632 / ISBN:https://doi.org/10.48550/arXiv.2501.18632 / Published by ArXiv / Version released on 2025-01-27 / on (web) Publishing site


Safety at Scale: A Comprehensive Survey of Large Model and Agent Safety / 2502.05206 / ISBN:https://doi.org/10.48550/arXiv.2502.05206 / Published by ArXiv / Version released on 2025-08-02 / on (web) Publishing site


Prioritization First, Principles Second: An Adaptive Interpretation of Helpful, Honest, and Harmless Principles / 2502.06059 / ISBN:https://doi.org/10.48550/arXiv.2502.06059 / Published by ArXiv / Version released on 2025-10-14 / on (web) Publishing site


On the Trustworthiness of Generative Foundation Models: Guideline, Assessment, and Perspective / 2502.14296 / ISBN:https://doi.org/10.48550/arXiv.2502.14296 / Published by ArXiv / Version released on 2025-09-30 / on (web) Publishing site


Values in the Wild: Discovering and Analyzing Values in Real-World Language Model Interactions / 2504.15236 / ISBN:https://doi.org/10.48550/arXiv.2504.15236 / Published by ArXiv / Version released on 2025-04-21 / on (web) Publishing site


Ethics and Persuasion in Reinforcement Learning from Human Feedback: A Procedural Rhetorical Approach / 2505.09576 / ISBN:https://doi.org/10.48550/arXiv.2505.09576 / Published by ArXiv / Version released on 2025-05-14 / on (web) Publishing site


Analysing Safety Risks in LLMs Fine-Tuned with Pseudo-Malicious Cyber Security Data / 2505.09974 / ISBN:https://doi.org/10.48550/arXiv.2505.09974 / Published by ArXiv / Version released on 2025-05-15 / on (web) Publishing site


Formalising Human-in-the-Loop: Computational Reductions, Failure Modes, and Legal-Moral Responsibility / 2505.10426 / ISBN:https://doi.org/10.48550/arXiv.2505.10426 / Published by ArXiv / Version released on 2025-09-25 / on (web) Publishing site


Let's have a chat with the EU AI Act / 2505.11946 / ISBN:https://doi.org/10.48550/arXiv.2505.11946 / Published by ArXiv / Version released on 2025-05-17 / on (web) Publishing site


AI vs. Human Judgment of Content Moderation: LLM-as-a-Judge and Ethics-Based Response Refusals / 2505.15365 / ISBN:https://doi.org/10.48550/arXiv.2505.15365 / Published by ArXiv / Version released on 2025-05-21 / on (web) Publishing site


Just as Humans Need Vaccines, So Do Models: Model Immunization to Combat Falsehoods / 2505.17870 / ISBN:https://doi.org/10.48550/arXiv.2505.17870 / Published by ArXiv / Version released on 2025-05-23 / on (web) Publishing site


Wide Reflective Equilibrium in LLM Alignment: Bridging Moral Epistemology and AI Safety / 2506.00415 / ISBN:https://doi.org/10.48550/arXiv.2506.00415 / Published by ArXiv / Version released on 2025-05-31 / on (web) Publishing site


Feeling Machines: Ethics, Culture, and the Rise of Emotional AI / 2506.12437 / ISBN:https://doi.org/10.48550/arXiv.2506.12437 / Published by ArXiv / Version released on 2025-06-14 / on (web) Publishing site


The Evolving Role of Large Language Models in Scientific Innovation: Evaluator, Collaborator, and Scientist / 2507.11810 / ISBN:https://doi.org/10.48550/arXiv.2507.11810 / Published by ArXiv / Version released on 2025-07-16 / on (web) Publishing site


Exploiting Jailbreaking Vulnerabilities in Generative AI to Bypass Ethical Safeguards for Facilitating Phishing Attacks / 2507.12185 / ISBN:https://doi.org/10.48550/arXiv.2507.12185 / Published by ArXiv / Version released on 2025-07-16 / on (web) Publishing site


EthicAlly: a Prototype for AI-Powered Research Ethics Support for the Social Sciences and Humanities / 2508.00856 / ISBN:https://doi.org/10.48550/arXiv.2508.00856 / Published by ArXiv / Version released on 2025-07-15 / on (web) Publishing site


A Comprehensive Survey of Self-Evolving AI Agents: A New Paradigm Bridging Foundation Models and Lifelong Agentic Systems / 2508.07407 / ISBN:https://doi.org/10.48550/arXiv.2508.07407 / Published by ArXiv / Version released on 2025-08-31 / on (web) Publishing site


Do Students Rely on AI? Analysis of Student-ChatGPT Conversations from a Field Study / 2508.20244 / ISBN:https://doi.org/10.48550/arXiv.2508.20244 / Published by ArXiv / Version released on 2025-08-27 / on (web) Publishing site


Beyond Prediction: Reinforcement Learning as the Defining Leap in Healthcare AI / 2508.21101 / ISBN:https://doi.org/10.48550/arXiv.2508.21101 / Published by ArXiv / Version released on 2025-08-28 / on (web) Publishing site


ArGen: Auto-Regulation of Generative AI via GRPO and Policy-as-Code / 2509.07006 / ISBN:https://doi.org/10.48550/arXiv.2509.07006 / Published by ArXiv / Version released on 2025-09-06 / on (web) Publishing site


TVS Sidekick: Challenges and Practical Insights from Deploying Large Language Models in the Enterprise / 2509.26482 / ISBN:https://doi.org/10.48550/arXiv.2509.26482 / Published by ArXiv / Version released on 2025-09-30 / on (web) Publishing site


The Scales of Justitia: A Comprehensive Survey on Safety Evaluation of LLMs / 2506.11094 / ISBN:https://doi.org/10.48550/arXiv.2506.11094 / Published by ArXiv / Version released on 2025-10-30 / on (web) Publishing site


Navigating the Ethical and Societal Impacts of Generative AI in Higher Computing Education / 2511.15768 / ISBN:https://doi.org/10.48550/arXiv.2511.15768 / Published by ArXiv / Version released on 2025-11-19 / on (web) Publishing site


Morality in AI. A plea to embed morality in LLM architectures and frameworks / 2511.20689 / ISBN:https://doi.org/10.48550/arXiv.2511.20689 / Published by ArXiv / Version released on 2025-11-21 / on (web) Publishing site


Human-Centered Artificial Social Intelligence (HC-ASI) / 2511.21044 / ISBN:https://doi.org/10.48550/arXiv.2511.21044 / Published by ArXiv / Version released on 2025-11-26 / on (web) Publishing site


Medical Malice: A Dataset for Context-Aware Safety in Healthcare LLMs / 2511.21757 / ISBN:https://doi.org/10.48550/arXiv.2511.21757 / Published by ArXiv / Version released on 2025-11-24 / on (web) Publishing site


Mind the Gap! Pathways Towards Unifying AI Safety and Ethics Research / 2512.10058 / ISBN:https://doi.org/10.48550/arXiv.2512.10058 / Published by ArXiv / Version released on 2025-12-10 / on (web) Publishing site