Tag: barocas
Bibliography items where it occurs: 17
- Regulation and NLP (RegNLP): Taming Large Language Models / 2310.05553 / DOI: https://doi.org/10.48550/arXiv.2310.05553 / Published by ArXiv
  - 3 LLMs: Risk and Uncertainty
- Kantian Deontology Meets AI Alignment: Towards Morally Grounded Fairness Metrics / 2311.05227 / DOI: https://doi.org/10.48550/arXiv.2311.05227 / Published by ArXiv
  - 1 Introduction
  - 3 Measuring Fairness Metrics
- Fairness And Bias in Artificial Intelligence: A Brief Survey of Sources, Impacts, And Mitigation Strategies / 2304.07683 / DOI: https://doi.org/10.48550/arXiv.2304.07683 / Published by ArXiv
  - I. Introduction
  - IV. Mitigation strategies for bias in AI
  - V. Fairness in AI
  - VI. Mitigation strategies for fairness in AI
- Responsible AI Research Needs Impact Statements Too / 2311.11776 / DOI: https://doi.org/10.48550/arXiv.2311.11776 / Published by ArXiv
  - Concluding Reflections
- Survey on AI Ethics: A Socio-technical Perspective / 2311.17228 / DOI: https://doi.org/10.48550/arXiv.2311.17228 / Published by ArXiv
  - 4 Fairness and equity
- Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making / 2401.08691 / DOI: https://doi.org/10.48550/arXiv.2401.08691 / Published by ArXiv
  - 4 Fairness metrics landscape in machine learning
- The Journey to Trustworthy AI - Part 1: Pursuit of Pragmatic Frameworks / 2403.15457 / DOI: https://doi.org/10.48550/arXiv.2403.15457 / Published by ArXiv
  - 6 Bias and Fairness
- The Narrow Depth and Breadth of Corporate Responsible AI Research / 2405.12193 / DOI: https://doi.org/10.48550/arXiv.2405.12193 / Published by ArXiv
  - 1 Introduction
  - 3 Motivations for Industry to Engage in Responsible AI Research
- A Blueprint for Auditing Generative AI / 2407.05338 / DOI: https://doi.org/10.48550/arXiv.2407.05338 / Published by ArXiv
  - 2 Why audit generative AI systems?
- Auditing of AI: Legal, Ethical and Technical Approaches / 2407.06235 / Published by ArXiv
  - 4 Auditing of AI's multidisciplinary foundations
- Speculations on Uncertainty and Humane Algorithms / 2408.06736 / DOI: https://doi.org/10.48550/arXiv.2408.06736 / Published by ArXiv
  - 3 Uncertainty Ex Machina
- Vernacularizing Taxonomies of Harm is Essential for Operationalizing Holistic AI Safety / 2410.16562 / DOI: https://doi.org/10.48550/arXiv.2410.16562 / Published by ArXiv
  - Introduction
  - Taxonomies of Harm Must be Vernacularized to be Operationalized
- Good intentions, unintended consequences: exploring forecasting harms / 2411.16531 / DOI: https://doi.org/10.48550/arXiv.2411.16531 / Published by ArXiv
  - 2 Harms in forecasting
- Bias in Decision-Making for AI's Ethical Dilemmas: A Comparative Study of ChatGPT and Claude / 2501.10484 / DOI: https://doi.org/10.48550/arXiv.2501.10484 / Published by ArXiv
  - Related Works
- AI Automatons: AI Systems Intended to Imitate Humans / 2503.02250 / DOI: https://doi.org/10.48550/arXiv.2503.02250 / Published by ArXiv
  - 1 Introduction
- Decoding the Black Box: Integrating Moral Imagination with Technical AI Governance / 2503.06411 / DOI: https://doi.org/10.48550/arXiv.2503.06411 / Published by ArXiv
  - 2 Key Themes from Weapons of Math Destruction
- Gender and content bias in Large Language Models: a case study on Google Gemini 2.0 Flash Experimental / 2503.16534 / DOI: https://doi.org/10.48550/arXiv.2503.16534 / Published by ArXiv
  - 1 Introduction