The alignment problem : machine learning and human values

"A jaw-dropping exploration of everything that goes wrong when we build AI systems-and the movement to fix them. Today's "machine-learning" systems, trained by data, are so effective that we've invited them to see and hear for us-and to make decisions on our behalf. But alar...

Descripción completa

Bibliographic Details
Main author: Christian, Brian, 1984-
Format: Book
Language: English
Published: New York : W.W. Norton, 2021.
Subjects:
LEADER 02597nam a2200349 a 4500
001 991035227904151
005 20250314162304.0
008 250225t20212020nyu b 001 0 eng d
020 |a 9780393868333  |q (paperback) 
020 |a 0393868338  |q (paperback) 
035 |a (OCoLC)1501951789 
035 |a (OCoLC)on1501951789 
040 |a U@S  |b spa  |c U@S 
049 |a U@SA 
050 4 |a Q334.7  |b .C47 2021 
100 1 |a Christian, Brian,  |d 1984- 
245 1 4 |a The alignment problem :  |b machine learning and human values /  |c Brian Christian. 
246 3 0 |a Machine learning and human values 
264 1 |a New York :  |b W.W. Norton,  |c 2021. 
300 |a xvi, 476 pages ;  |c 21 cm. 
500 |a "Brian Christian: best-selling author, Algorithms to Live by."--Cubierta. 
504 |a Includes bibliographical references (pages 401-451) and index. 
505 0 |a I. Prophecy -- 1. Representation -- 2. Fairness -- 3. Transparency -- II. Agency -- 4. Reinforcement -- 5. Shaping -- 6. Curiosity -- III. Normativity -- 7. Imitation -- 8. Inference -- 9. Uncertainty -- Conclusion. 
520 |a "A jaw-dropping exploration of everything that goes wrong when we build AI systems-and the movement to fix them. Today's "machine-learning" systems, trained by data, are so effective that we've invited them to see and hear for us-and to make decisions on our behalf. But alarm bells are ringing. Systems cull résumés until, years later, we discover that they have inherent gender biases. Algorithms decide bail and parole-and appear to assess black and white defendants differently. We can no longer assume that our mortgage application, or even our medical tests, will be seen by human eyes. And autonomous vehicles on our streets can injure or kill. When systems we attempt to teach will not, in the end, do what we want or what we expect, ethical and potentially existential risks emerge. Researchers call this the alignment problem. In best-selling author Brian Christian's riveting account, we meet the alignment problem's "first-responders," and learn their ambitious plan to solve it before our hands are completely off the wheel"--Descripción del editor. 
650 0 |a Artificial intelligence  |x Moral and ethical aspects. 
650 0 |a Artificial intelligence  |x Social aspects. 
650 0 |a Machine learning  |x Safety measures. 
650 0 |a Software failures. 
650 7 |a Inteligencia artificial  |x Aspectos éticos y morales.  |2 UDESA 
650 7 |a Inteligencia artificial  |x Aspectos sociales.  |2 UDESA 
650 7 |a Aprendizaje automático  |x Medidas de seguridad.  |2 UDESA 
650 7 |a Fallas de software.  |2 UDESA