RobertoLofaro.com - Knowledge Portal
Change, with and without technology



AI Ethics Primer - search within the bibliography - version 0.3 of 2023-08-13 - tag selected: rlhf


Tag: rlhf

Bibliography items where this tag occurs: 10
To search with more than one keyword, append each additional keyword to the URL prefixed by _ (underscore), up to 50 characters in total.
A Survey of Safety and Trustworthiness of Large Language Models through the Lens of Verification and Validation / 2305.11391 / DOI: https://doi.org/10.48550/arXiv.2305.11391 / Published by ArXiv / on (web) Publishing site
1 Introduction
2 Large Language Models
5 Falsification and Evaluation
Reference


Building Trust in Conversational AI: A Comprehensive Review and Solution Architecture for Explainable, Privacy-Aware Systems using LLMs and Knowledge Graph / 2308.13534 / DOI: https://doi.org/10.48550/arXiv.2308.13534 / Published by ArXiv / on (web) Publishing site
I. Introduction
II. Methods and training process of LLMs
VI. Solution architecture for privacy-aware and trustworthy conversational AI


The Impact of Artificial Intelligence on the Evolution of Digital Education: A Comparative Study of OpenAI Text Generation Tools including ChatGPT, Bing Chat, Bard, and Ernie / 2309.02029 / DOI: https://doi.org/10.48550/arXiv.2309.02029 / Published by ArXiv / on (web) Publishing site
5. Discussion


A Survey of Large Language Models for Healthcare: from Data, Technology, and Applications to Accountability and Ethics / 2310.05694 / DOI: https://doi.org/10.48550/arXiv.2310.05694 / Published by ArXiv / on (web) Publishing site
III. FROM PLMS TO LLMS FOR HEALTHCARE
IV. TRAIN AND USE LLM FOR HEALTHCARE
VI. IMPROVING FAIRNESS, ACCOUNTABILITY, TRANSPARENCY, AND ETHICS
References


STREAM: Social data and knowledge collective intelligence platform for TRaining Ethical AI Models / 2310.05563 / DOI: https://doi.org/10.48550/arXiv.2310.05563 / Published by ArXiv / on (web) Publishing site
3 The applications of STREAM


Regulation and NLP (RegNLP): Taming Large Language Models / 2310.05553 / DOI: https://doi.org/10.48550/arXiv.2310.05553 / Published by ArXiv / on (web) Publishing site
3 LLMs: Risk and Uncertainty


Specific versus General Principles for Constitutional AI / 2310.13798 / DOI: https://doi.org/10.48550/arXiv.2310.13798 / Published by ArXiv / on (web) Publishing site
1 Introduction
2 AI feedback on specific problematic AI traits
4 Reinforcement Learning with Good-for-Humanity Preference Models
5 Related Work
6 Discussion
A Model Glossary
D Generalization to Other Traits
G Over-Training on Good for Humanity
H Samples
I Responses on Prompts from PALMS, LaMDA, and InstructGPT


AI Alignment and Social Choice: Fundamental Limitations and Policy Implications / 2310.16048 / DOI: https://doi.org/10.48550/arXiv.2310.16048 / Published by ArXiv / on (web) Publishing site
Abstract
1 Introduction
2 Reinforcement Learning with Multiple Reinforcers
3 Arrow-Sen Impossibility Theorems for RLHF
4 Implications for AI Governance and Policy
5 Conclusion


Unpacking the Ethical Value Alignment in Big Models / 2310.17551 / DOI: https://doi.org/10.48550/arXiv.2310.17551 / Published by ArXiv / on (web) Publishing site
1 Introduction
2 Risks and Ethical Issues of Big Model
3 Investigating the Ethical Values of Large Language Models
4 Equilibrium Alignment: A Prospective Paradigm for Ethical Value Alignment


LLMs grasp morality in concept / 2311.02294 / DOI: https://doi.org/10.48550/arXiv.2311.02294 / Published by ArXiv / on (web) Publishing site
4 The Moral Model
A Supplementary Material