[sea-list] Beth Barnes (OpenAI) on the Safe Development of Transformative AI, This Sunday (Nov. 10) 5-6pm

Kuhan Jeyapragasan kuhanj at stanford.edu
Wed Nov 6 16:53:09 PST 2019


The Stanford Transformative AI Safety Group (STAIS) and Stanford Effective
Altruism invite you to: Beth Barnes (OpenAI) on the Safe Development of
Transformative AI, this Sunday (Nov. 10), 5-6pm.

[image: event poster (Screen Shot 2019-11-06 at 10.06.31 AM.png)]

To learn more about the field of AI Safety, read this introduction:
https://futureoflife.org/background/benefits-risks-of-artificial-intelligence/

To learn why AI Safety is considered one of the highest-priority EA cause areas, read:
https://www.effectivealtruism.org/articles/potential-risks-from-advanced-ai-daniel-dewey/

For weekly updates on AI Alignment/Safety, sign up for the AI Alignment
Newsletter <http://rohinshah.com/alignment-newsletter/> (by Rohin Shah from
the Center for Human-Compatible AI at UC Berkeley).

Check out our Facebook event
<https://www.facebook.com/events/2483710768572917>.

To sign up for the STAIS mailing list and/or confirm attendance, please
fill out the RSVP form <https://forms.gle/UH5uZJ4Memi6QStM6>.