Recommendation systems (e.g., Twitter's) optimize for your attention, often to the detriment of your own well-being. Their objective is fundamentally misaligned with yours. Perhaps you signed up for Twitter to keep up with research, but a single click on a funny meme floods your timeline with similar content. Perhaps you are recovering from alcohol addiction, but the recommendation system knows your past love of alcohol too well and fills your feed with alcohol ads. The recurring problem is that recommendation systems are skilled at catering to who you are, but contribute nothing toward who you aspire to be.

As a practical first remedy, we believe that you, as a user, have the right not to see what you do not wish to see, and we want to share this superpower with you through RecAlign (short for Recommendation Alignment), an open source initiative for recommendation alignment. You specify your preference for viewing recommendations (e.g., Tweets) in plain words, such as "I like reading about AI research". We then use a large language model (LLM) to vet recommendations against your explicitly stated preference, in a transparent and editable way, and filter out those you do not wish to see.
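The filtering step described above can be sketched as follows. This is a minimal illustration, not RecAlign's actual implementation: the function and variable names are hypothetical, and the `judge` callable stands in for a real LLM call (e.g., an API request asking whether an item matches the stated preference). Here a deterministic keyword stub plays that role so the sketch runs on its own.

```python
from typing import Callable, List

def build_prompt(preference: str, item: str) -> str:
    """Compose the yes/no question we would send to an LLM."""
    return (
        f'User preference: "{preference}"\n'
        f'Candidate recommendation: "{item}"\n'
        "Does this item match the preference? Answer yes or no."
    )

def filter_feed(items: List[str], preference: str,
                judge: Callable[[str], bool]) -> List[str]:
    """Keep only the items the judge (LLM stand-in) approves."""
    return [item for item in items if judge(build_prompt(preference, item))]

# Deterministic stub in place of a real LLM, for illustration only:
# approve an item if the candidate line mentions "AI research".
def keyword_judge(prompt: str) -> bool:
    return "AI research" in prompt.splitlines()[1]

feed = [
    "New paper on LLM alignment in AI research",
    "Funny cat meme compilation",
    "AI research thread: scaling laws explained",
]
kept = filter_feed(feed, "I like reading about AI research", keyword_judge)
```

Because the preference is an editable piece of text and the prompt is fully visible, the user can inspect and change exactly what is being filtered and why.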