RobertoLofaro.com - Knowledge Portal - human-generated content
Change, with and without technology



You are now here: AI Ethics Primer - search within the bibliography - version 0.4 of 2023-12-13 > (tag cloud) > tag selected: gemma



Tag: gemma

Bibliography items where the tag occurs: 19
Unlocking the Potential of ChatGPT: A Comprehensive Exploration of its Applications, Advantages, Limitations, and Future Directions in Natural Language Processing / 2304.02017 / ISBN:https://doi.org/10.48550/arXiv.2304.02017 / Published by ArXiv / on (web) Publishing site
3 Transformers and pre-trained language models


The Future of Child Development in the AI Era. Cross-Disciplinary Perspectives Between AI and Child Development Experts / 2405.19275 / ISBN:https://doi.org/10.48550/arXiv.2405.19275 / Published by ArXiv / on (web) Publishing site
1. Introduction


How Ethical Should AI Be? How AI Alignment Shapes the Risk Preferences of LLMs / 2406.01168 / ISBN:https://doi.org/10.48550/arXiv.2406.01168 / Published by ArXiv / on (web) Publishing site
II. Risk Characteristics of LLMs


MoralBench: Moral Evaluation of LLMs / 2406.04428 / Published by ArXiv / on (web) Publishing site
4 Experiments


The Ethics of Interaction: Mitigating Security Threats in LLMs / 2401.12273 / ISBN:https://doi.org/10.48550/arXiv.2401.12273 / Published by ArXiv / on (web) Publishing site
1 Introduction


Open Artificial Knowledge / 2407.14371 / ISBN:https://doi.org/10.48550/arXiv.2407.14371 / Published by ArXiv / on (web) Publishing site
Abstract
3. OAK Dataset
6. Conclusion and Future Work
Appendices


VersusDebias: Universal Zero-Shot Debiasing for Text-to-Image Models via SLM-Based Prompt Engineering and Generative Adversary / 2407.19524 / ISBN:https://doi.org/10.48550/arXiv.2407.19524 / Published by ArXiv / on (web) Publishing site
4 Experiment


ValueCompass: A Framework for Measuring Contextual Value Alignment Between Human and LLMs / 2409.09586 / ISBN:https://doi.org/10.48550/arXiv.2409.09586 / Published by ArXiv / on (web) Publishing site
3 ValueCompass Framework
4 Experimental Settings
5 Results
Appendices


Learning Human-like Representations to Enable Learning Human Values / 2312.14106 / ISBN:https://doi.org/10.48550/arXiv.2312.14106 / Published by ArXiv / on (web) Publishing site
A. Appendix


Improving governance outcomes through AI documentation: Bridging theory and practice / 2409.08960 / ISBN:https://doi.org/10.48550/arXiv.2409.08960 / Published by ArXiv / on (web) Publishing site


The doctor will polygraph you now: ethical concerns with AI for fact-checking patients / 2408.07896 / ISBN:https://doi.org/10.48550/arXiv.2408.07896 / Published by ArXiv / on (web) Publishing site
4. Results


Large-scale moral machine experiment on large language models / 2411.06790 / ISBN:https://doi.org/10.48550/arXiv.2411.06790 / Published by ArXiv / on (web) Publishing site
Abstract
Introduction
Materials and methods
Results
Discussion


Safety at Scale: A Comprehensive Survey of Large Model Safety / 2502.05206 / ISBN:https://doi.org/10.48550/arXiv.2502.05206 / Published by ArXiv / on (web) Publishing site
3 Large Language Model Safety


On the Trustworthiness of Generative Foundation Models: Guideline, Assessment, and Perspective / 2502.14296 / ISBN:https://doi.org/10.48550/arXiv.2502.14296 / Published by ArXiv / on (web) Publishing site
1 Introduction
2 Background
4 Designing TrustGen, a Dynamic Benchmark Platform for Evaluating the Trustworthiness of GenFMs
6 Benchmarking Large Language Models


Comprehensive Analysis of Transparency and Accessibility of ChatGPT, DeepSeek, And other SoTA Large Language Models / 2502.18505 / ISBN:https://doi.org/10.48550/arXiv.2502.18505 / Published by ArXiv / on (web) Publishing site
3. Results


MinorBench: A hand-built benchmark for content-based risks for children / 2503.10242 / ISBN:https://doi.org/10.48550/arXiv.2503.10242 / Published by ArXiv / on (web) Publishing site
2 Literature Review


Towards Safer Pretraining: Analyzing and Filtering Harmful Content in Webscale datasets for Responsible LLMs / 2505.02009 / ISBN:https://doi.org/10.48550/arXiv.2505.02009 / Published by ArXiv / on (web) Publishing site
4 Topical and Toxic Prompt (TTP)
6 Results


Analysing Safety Risks in LLMs Fine-Tuned with Pseudo-Malicious Cyber Security Data / 2505.09974 / ISBN:https://doi.org/10.48550/arXiv.2505.09974 / Published by ArXiv / on (web) Publishing site
Abstract
2 Related Work
3 Methodology
4 Results


Are Language Models Consequentialist or Deontological Moral Reasoners? / 2505.21479 / ISBN:https://doi.org/10.48550/arXiv.2505.21479 / Published by ArXiv / on (web) Publishing site
4 Methodology
5 Experimental Findings
Appendix