<div dir="ltr"><div style="color:rgb(0,0,0);text-align:center"><font size="4" color="#cc0000">The Stanford Transformative AI Safety Group (STAIS) + Stanford Effective Altruism invite you to:</font></div><div style="color:rgb(0,0,0);text-align:center"><font size="4"><br></font></div><div style="text-align:center"><div><font size="4"><img src="cid:ii_k2nv8zaw0" alt="Screen Shot 2019-11-06 at 10.06.31 AM.png" width="418" height="542" class="gmail-CToWUd gmail-a6T" tabindex="0" style="cursor: pointer; outline: 0px;"><br></font></div></div><div style="text-align:center"><div style="color:rgb(0,0,0)"><font size="4"><br></font></div><div style="color:rgb(0,0,0)"><font color="#000000" face="arial, sans-serif" size="4">Check out our <a href="https://www.facebook.com/events/2483710768572917" target="_blank">Facebook event</a>.</font></div><div style="color:rgb(0,0,0)"><font color="#000000" face="arial, sans-serif" size="4"><br></font></div><div style="color:rgb(0,0,0)"><font face="arial, sans-serif" size="4">To sign up for the STAIS mailing list and/or confirm attendance, please fill out the <a href="https://forms.gle/UH5uZJ4Memi6QStM6" target="_blank">RSVP form</a>.</font></div><div style="color:rgb(0,0,0)"><div><div><font color="#666666" size="4"><br></font></div><div><span style="color:rgb(33,33,33);font-family:"Graphik Meetup",-apple-system,system-ui,Roboto,Helvetica,Arial,sans-serif;text-align:start">To learn more about the field of AI Safety, read this introduction:</span><br style="color:rgb(33,33,33);font-family:"Graphik Meetup",-apple-system,system-ui,Roboto,Helvetica,Arial,sans-serif;text-align:start"><a href="https://futureoflife.org/background/benefits-risks-of-artificial-intelligence/" title="https://futureoflife.org/background/benefits-risks-of-artificial-intelligence/" target="_blank" style="display:inline;text-decoration-line:none;font-family:"Graphik Meetup",-apple-system,system-ui,Roboto,Helvetica,Arial,sans-serif;text-align:start">https://futureoflife.org/background/benefits-risks-of-artificial-intelligence/</a><br style="color:rgb(33,33,33);font-family:"Graphik Meetup",-apple-system,system-ui,Roboto,Helvetica,Arial,sans-serif;text-align:start"><br style="color:rgb(33,33,33);font-family:"Graphik Meetup",-apple-system,system-ui,Roboto,Helvetica,Arial,sans-serif;text-align:start"><span style="color:rgb(33,33,33);font-family:"Graphik Meetup",-apple-system,system-ui,Roboto,Helvetica,Arial,sans-serif;text-align:start">Why AI Safety is considered one of the highest-priority EA Cause Areas: </span><a href="https://www.effectivealtruism.org/articles/potential-risks-from-advanced-ai-daniel-dewey/" title="https://www.effectivealtruism.org/articles/potential-risks-from-advanced-ai-daniel-dewey/" target="_blank" style="display:inline;text-decoration-line:none;font-family:"Graphik Meetup",-apple-system,system-ui,Roboto,Helvetica,Arial,sans-serif;text-align:start">https://www.effectivealtruism.org/articles/potential-risks-from-advanced-ai-daniel-dewey/</a></div></div><div><div><br></div><div>For weekly updates on AI Alignment/Safety, sign up for the <a href="http://rohinshah.com/alignment-newsletter/" target="_blank">AI Alignment Newsletter</a> (by Rohin Shah from the Center for Human-Compatible AI at UC Berkeley).</div><div></div></div></div></div></div>