If you need more than one keyword, modify the field and separate the keywords with an underscore (_).
The list of search keywords can be up to 50 characters long.
After modifying the keywords, press Enter within the field to confirm the new search key.
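The keyword rules above (terms joined by underscores, at most 50 characters in total) can be sketched as a small validator. This is a hypothetical illustration, not part of the site's actual code; the function name and limit parameter are assumptions.

```python
def validate_search_keywords(keywords: str, max_len: int = 50) -> bool:
    """Check a search-keyword string: terms separated by underscores,
    total length at most max_len characters (hypothetical validator)."""
    if not keywords or len(keywords) > max_len:
        return False
    # Every term must be non-empty: this rejects leading, trailing,
    # or doubled underscores.
    return all(keywords.split("_"))

# Two keywords joined by a single underscore pass the check.
print(validate_search_keywords("florian_privacy"))  # True
# Over-long or malformed keyword strings fail.
print(validate_search_keywords("a" * 51))           # False
```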
Tag: florian
Bibliography items where it occurs: 36
- Intelligence Primer / 2008.07324 / DOI:https://doi.org/10.48550/arXiv.2008.07324 / Published by ArXiv / Version released on 2025-09-03 / on (web) Publishing site
- Enabling Global Image Data Sharing in the Life Sciences / 2401.13023 / DOI:https://doi.org/10.48550/arXiv.2401.13023 / Published by ArXiv / Version released on 2024-02-02 / on (web) Publishing site
- Global AI Governance in Healthcare: A Cross-Jurisdictional Regulatory Analysis / 2406.08695 / DOI:https://doi.org/10.48550/arXiv.2406.08695 / Published by ArXiv / Version released on 2024-06-12 / on (web) Publishing site
- Bridging the Global Divide in AI Regulation: A Proposal for a Contextual, Coherent, and Commensurable Framework / 2303.11196 / DOI:https://doi.org/10.48550/arXiv.2303.11196 / Published by ArXiv / Version released on 2024-07-15 / on (web) Publishing site
- Honest Computing: Achieving demonstrable data lineage and provenance for driving data and process-sensitive policies / 2407.14390 / DOI:https://doi.org/10.48550/arXiv.2407.14390 / Published by ArXiv / Version released on 2024-07-19 / on (web) Publishing site
- GenAI Advertising: Risks of Personalizing Ads with LLMs / 2409.15436 / DOI:https://doi.org/10.48550/arXiv.2409.15436 / Published by ArXiv / Version released on 2024-09-23 / on (web) Publishing site
- Ethical software requirements from user reviews: A systematic literature review / 2410.01833 / DOI:https://doi.org/10.48550/arXiv.2410.01833 / Published by ArXiv / Version released on 2024-09-18 / on (web) Publishing site
- Do LLMs Have Political Correctness? Analyzing Ethical Biases and Jailbreak Vulnerabilities in AI Systems / 2410.13334 / DOI:https://doi.org/10.48550/arXiv.2410.13334 / Published by ArXiv / Version released on 2024-10-23 / on (web) Publishing site
- FairT2I: Mitigating Social Bias in Text-to-Image Generation via Large Language Model-Assisted Detection and Attribute Rebalancing / 2502.03826 / DOI:https://doi.org/10.48550/arXiv.2502.03826 / Published by ArXiv / Version released on 2025-08-15 / on (web) Publishing site
- On the Trustworthiness of Generative Foundation Models: Guideline, Assessment, and Perspective / 2502.14296 / DOI:https://doi.org/10.48550/arXiv.2502.14296 / Published by ArXiv / Version released on 2025-09-30 / on (web) Publishing site
- Fair Foundation Models for Medical Image Analysis: Challenges and Perspectives / 2502.16841 / DOI:https://doi.org/10.48550/arXiv.2502.16841 / Published by ArXiv / Version released on 2026-01-14 / on (web) Publishing site
- DarkBench: Benchmarking Dark Patterns in Large Language Models / 2503.10728 / DOI:https://doi.org/10.48550/arXiv.2503.10728 / Published by ArXiv / Version released on 2025-03-13 / on (web) Publishing site
- Transforming Science with Large Language Models: A Survey on AI-assisted Scientific Discovery, Experimentation, Content Generation, and Evaluation / 2502.05151 / DOI:https://doi.org/10.48550/arXiv.2502.05151 / Published by ArXiv / Version released on 2026-03-05 / on (web) Publishing site
- From Automation to Autonomy: A Survey on Large Language Models in Scientific Discovery / 2505.13259 / DOI:https://doi.org/10.48550/arXiv.2505.13259 / Published by ArXiv / Version released on 2025-09-17 / on (web) Publishing site
- On the Surprising Efficacy of LLMs for Penetration-Testing / 2507.00829 / DOI:https://doi.org/10.48550/arXiv.2507.00829 / Published by ArXiv / Version released on 2025-07-01 / on (web) Publishing site
- Exploring Collaboration Patterns and Strategies in Human-AI Co-creation through the Lens of Agency: A Scoping Review of the Top-tier HCI Literature / 2507.06000 / DOI:https://doi.org/10.48550/arXiv.2507.06000 / Published by ArXiv / Version released on 2025-09-26 / on (web) Publishing site
- When Large Language Models Meet Law: Dual-Lens Taxonomy, Technical Advances, and Ethical Governance / 2507.07748 / DOI:https://doi.org/10.48550/arXiv.2507.07748 / Published by ArXiv / Version released on 2025-07-10 / on (web) Publishing site
- Artificial Intelligence Governance for Businesses / 2011.10672 / DOI:https://doi.org/10.48550/arXiv.2011.10672 / Published by ArXiv / Version released on 2025-07-16 / on (web) Publishing site
- Policy-Driven AI in Dataspaces: Taxonomy, Explainability, and Pathways for Compliant Innovation / 2507.20014 / DOI:https://doi.org/10.48550/arXiv.2507.20014 / Published by ArXiv / Version released on 2025-07-30 / on (web) Publishing site
- Challenges of Trustworthy Federated Learning: What's Done, Current Trends and Remaining Work / 2507.15796 / DOI:https://doi.org/10.48550/arXiv.2507.15796 / Published by ArXiv / Version released on 2025-07-21 / on (web) Publishing site
- Countering Privacy Nihilism / 2507.18253 / DOI:https://doi.org/10.48550/arXiv.2507.18253 / Published by ArXiv / Version released on 2025-07-24 / on (web) Publishing site
- Defining ethically sourced code generation / 2507.19743 / DOI:https://doi.org/10.48550/arXiv.2507.19743 / Published by ArXiv / Version released on 2025-07-26 / on (web) Publishing site
- Development of management systems using artificial intelligence systems and machine learning methods for boards of directors (preprint, unofficial translation) / 2508.03769 / DOI:https://doi.org/10.48550/arXiv.2508.03769 / Published by ArXiv / Version released on 2025-08-05 / on (web) Publishing site
- On Developers' Self-Declaration of AI-Generated Code: An Analysis of Practices / 2504.16485 / DOI:https://doi.org/10.48550/arXiv.2504.16485 / Published by ArXiv / Version released on 2025-09-03 / on (web) Publishing site
- Logging Requirement for Continuous Auditing of Responsible Machine Learning-based Applications / 2508.17851 / DOI:https://doi.org/10.48550/arXiv.2508.17851 / Published by ArXiv / Version released on 2025-08-25 / on (web) Publishing site
- Between a Rock and a Hard Place: Exploiting Ethical Reasoning to Jailbreak LLMs / 2509.05367 / DOI:https://doi.org/10.48550/arXiv.2509.05367 / Published by ArXiv / Version released on 2025-09-12 / on (web) Publishing site
- Safe and Certifiable AI Systems: Concepts, Challenges, and Lessons Learned / 2509.08852 / DOI:https://doi.org/10.48550/arXiv.2509.08852 / Published by ArXiv / Version released on 2025-09-08 / on (web) Publishing site
- AI For Privacy in Smart Homes: Exploring How Leveraging AI-Powered Smart Devices Enhances Privacy Protection / 2509.14050 / DOI:https://doi.org/10.48550/arXiv.2509.14050 / Published by ArXiv / Version released on 2025-09-17 / on (web) Publishing site
- Perceptions of AI Across Sectors: A Comparative Review of Public Attitudes / 2509.18233 / DOI:https://doi.org/10.48550/arXiv.2509.18233 / Published by ArXiv / Version released on 2025-09-22 / on (web) Publishing site
- Making Power Explicable in AI: Analyzing, Understanding, and Redirecting Power to Operationalize Ethics in AI Technical Practice / 2510.10588 / DOI:https://doi.org/10.48550/arXiv.2510.10588 / Published by ArXiv / Version released on 2025-10-12 / on (web) Publishing site
- PrivacyBench: A Conversational Benchmark for Evaluating Privacy in Personalized AI / 2512.24848 / DOI:https://doi.org/10.48550/arXiv.2512.24848 / Published by ArXiv / Version released on 2025-12-31 / on (web) Publishing site
- Reimagining Legal Fact Verification with GenAI: Toward Effective Human-AI Collaboration / 2602.06305 / DOI:https://doi.org/10.48550/arXiv.2602.06305 / Published by ArXiv / Version released on 2026-02-09 / on (web) Publishing site
- Disclose with Care: Designing Privacy Controls in Interview Chatbots / 2602.01387 / DOI:https://doi.org/10.48550/arXiv.2602.01387 / Published by ArXiv / Version released on 2026-02-01 / on (web) Publishing site
- Futuring Social Assemblages: How Enmeshing AIs into Social Life Challenges the Individual and the Interpersonal / 2602.03958 / DOI:https://doi.org/10.48550/arXiv.2602.03958 / Published by ArXiv / Version released on 2026-02-03 / on (web) Publishing site
- Reliable and Responsible Foundation Models: A Comprehensive Survey / 2602.08145 / DOI:https://doi.org/10.48550/arXiv.2602.08145 / Published by ArXiv / Version released on 2026-02-04 / on (web) Publishing site
- Dark and Bright Side of Participatory Red-Teaming with Targets of Stereotyping for Eliciting Harmful Behaviors from Large Language Models / 2602.19124 / DOI:https://doi.org/10.48550/arXiv.2602.19124 / Version released on 2026-02-22 / on (web) Publishing site