Tag: opreview
Bibliography items in which this tag occurs: 13
- How Ethical Should AI Be? How AI Alignment Shapes the Risk Preferences of LLMs / 2406.01168 / DOI:https://doi.org/10.48550/arXiv.2406.01168 / Published by ArXiv / Version released on 2024-08-01 / on (web) Publishing site
- Moral Agency in Silico: Exploring Free Will in Large Language Models / 2410.23310 / DOI:https://doi.org/10.48550/arXiv.2410.23310 / Published by ArXiv / Version released on 2024-10-29 / on (web) Publishing site
- Chat Bankman-Fried: an Exploration of LLM Alignment in Finance / 2411.11853 / DOI:https://doi.org/10.48550/arXiv.2411.11853 / Published by ArXiv / Version released on 2024-11-21 / on (web) Publishing site
- Artificial Intelligence in Cybersecurity: Building Resilient Cyber Diplomacy Frameworks / 2411.13585 / DOI:https://doi.org/10.48550/arXiv.2411.13585 / Published by ArXiv / Version released on 2024-11-17 / on (web) Publishing site
- Can OpenAI o1 outperform humans in higher-order cognitive thinking? / 2412.05753 / DOI:https://doi.org/10.48550/arXiv.2412.05753 / Published by ArXiv / Version released on 2024-12-07 / on (web) Publishing site
- On the Trustworthiness of Generative Foundation Models: Guideline, Assessment, and Perspective / 2502.14296 / DOI:https://doi.org/10.48550/arXiv.2502.14296 / Published by ArXiv / Version released on 2025-09-30 / on (web) Publishing site
- Comprehensive Analysis of Transparency and Accessibility of ChatGPT, DeepSeek, And other SoTA Large Language Models / 2502.18505 / DOI:https://doi.org/10.48550/arXiv.2502.18505 / Published by ArXiv / Version released on 2025-02-21 / on (web) Publishing site
- Medical Hallucinations in Foundation Models and Their Impact on Healthcare / 2503.05777 / DOI:https://doi.org/10.48550/arXiv.2503.05777 / Published by ArXiv / Version released on 2025-02-26 / on (web) Publishing site
- Policy Frameworks for Transparent Chain-of-Thought Reasoning in Large Language Models / 2503.14521 / DOI:https://doi.org/10.48550/arXiv.2503.14521 / Published by ArXiv / Version released on 2025-03-14 / on (web) Publishing site
- On the Surprising Efficacy of LLMs for Penetration-Testing / 2507.00829 / DOI:https://doi.org/10.48550/arXiv.2507.00829 / Published by ArXiv / Version released on 2025-07-01 / on (web) Publishing site
- The Evolving Role of Large Language Models in Scientific Innovation: Evaluator, Collaborator, and Scientist / 2507.11810 / DOI:https://doi.org/10.48550/arXiv.2507.11810 / Published by ArXiv / Version released on 2025-07-16 / on (web) Publishing site
- A Comprehensive Survey of Self-Evolving AI Agents: A New Paradigm Bridging Foundation Models and Lifelong Agentic Systems / 2508.07407 / DOI:https://doi.org/10.48550/arXiv.2508.07407 / Published by ArXiv / Version released on 2025-08-31 / on (web) Publishing site
- Evaluating the Clinical Safety of LLMs in Response to High-Risk Mental Health Disclosures / 2509.08839 / DOI:https://doi.org/10.48550/arXiv.2509.08839 / Published by ArXiv / Version released on 2025-09-01 / on (web) Publishing site