Responsible Tech Community report, Safety by Design gathering
Plus an upcoming livestream, and an opportunity to join our AI Governance workshops and task forces
👋Well, hello there. Welcome to our latest All Tech Is Human newsletter, where we detail the numerous ways you can get involved in the Responsible Tech movement. We deeply believe that a stronger ecosystem leads to an enhanced ability to tackle thorny tech & society issues, which is why you’ll always see our activities centered on community-building, educational resources, and career activities. We’re not reinventing the wheel, we’re building a rocket.
🎊Today we are thrilled to release our Responsible Tech Community report, which features profiles of 25 individuals in the ecosystem along with descriptions of over 100 Responsible Tech orgs. We hope you value community, and see its relevance to tackling complex issues, as much as we do.
🎊We are also announcing a Safety by Design gathering for 200 individuals, taking place in NYC on April 23rd. In the coming newsletters, we will also be announcing key gatherings in London, San Francisco, and Washington, DC. As always, reach out our way with your ideas and suggestions. And for virtual gatherings, please join our next livestream happening Feb 13.
Now, onto the newsletter! 👇
📜Download our new Responsible Tech Community report featuring 25 interviews!
For our newest report, we dug into the value of community and the various ways that others go about building it. And, as we do in all of our reports, we asked interviewees about their own vision for a better tech future. If we are going to co-create a tech future aligned with the public interest, it behooves us to understand and weave together all of the different perspectives and outlooks. We would love for you to read our Responsible Tech Community report and share it with others who should be involved.
In our report, you will read interviews from Tazin Khan (Founder & CEO of Cyber Collective), Lara Galinsky (Head of Partnerships at Project Liberty), Zamaan Qureshi (Campaigns Associate at Accountable Tech; Co-Chair, Design It For Us), Aimee Bataclan (Head of Communications at Partnership on AI), and many more.
👏We’d love your help in expanding the reach of our Responsible Tech Community report. Here is our post about it on LinkedIn to share.
🎙️Looking for a podcast summary of the 25 profile interviews?! We utilized Google’s NotebookLM to create this podcast discussion.
🦺We are thrilled to announce a Safety by Design gathering for April 23 in NYC
All Tech Is Human will be setting up a curated gathering for 200 individuals across civil society, government, industry, and academia on Wednesday, April 23rd in NYC. This follows our recent report, Balancing Privacy and Child Safety in Encrypted Environments, which utilized the Safety by Design framework (popularized by Australia’s eSafety Commissioner Julie Inman-Grant, who delivered a fireside chat with Safe Online at our gathering last Sept).
“Safety by Design is an approach which puts user safety and rights at the center of the design and development of products and services. The goal is to anticipate and prevent harm which might occur while using products, rather than trying to implement remedies after the harm has occurred.” - INHOPE
This upcoming gathering will also feature updates about our ongoing collaboration with Thorn on developing and socializing standards for reducing AI-generated CSAM. Companies such as Google, Meta, OpenAI, Anthropic, Amazon, and Microsoft have publicly endorsed the principles found here.
Right now we are collecting general interest, along with panel and speaker suggestions. In the coming newsletters, we will announce the programming.
💡RELATED RESOURCE: Watch the panel we held last year on Safety by Design for Generative AI with Dr. Rebecca Portnoff (Head of Data Science, Thorn), Afrooz Kaviani Johnson (Child Protection Specialist, UNICEF), Sean Litton (President and Chief Executive Officer, Tech Coalition), Juliet Shen (Research Associate for Columbia University's Trust and Safety Tools Consortium), and moderator Matt Soeth.
👋Join our livestream on Feb 13th as we discuss taking back control of our online lives
How do we ensure our agency with social media platforms and emerging technology? In the latest installment of our Responsible Tech Author Series, we bring on author Mara Einstein. Mara is the author of the newly released book, Hoodwinked: How Marketers Use the Same Tactics as Cults.
Dr. Mara Einstein is an internationally recognized expert on deceptive marketing tactics. She worked in corporate marketing for advertising agencies, MTV Networks, and NBC before transitioning to academia. Hoodwinked was also selected for a book talk at SXSW 2025. You may recognize Dr. Einstein from "Buy Now! The Shopping Conspiracy," the Netflix documentary about overconsumption, marketing, and its impact on society and the planet.
🗺️We are currently mapping out multiple Responsible AI Governance projects
Are you involved in Responsible AI governance? If so, we would love to learn about your background as we put together workshops (in-person and virtual) and task forces related to Responsible AI governance. Here is our interest form.
These upcoming workshops build on learnings from our previous workshop last September; we recently released a report on what we learned there, which you can read here.
🗣️A few days ago, our organization curated a private gathering of 35 leaders in Responsible AI. When asked what they are excited or concerned about, here are some of their responses:
“I’m excited to see how companies comply with the DSA. This is the first year where compliance will be tested, and transparency reports are audited.” - Manojit Nandi, Senior Data Scientist at Spotify
"I’m excited about developing shared terminology for governing large models!" - Teresa Datta, Machine Learning Research Engineer
"I’m excited about defining AI Governance! Let's get some operational definitions that all stakeholders can get on board with!"’ - Lauri Goldkind, Professor at Fordham University
“I’m concerned about deepening the digital divide, due to a lack of investment in accessibility, inclusion, and connectivity.” - Shruthi Velidi, U.S. Fulbright Scholar
"I’m concerned about the issues around the anthropomorphizing on AI systems and dangerous deployment of AI systems that increase the global psychosocial risks of detachment from reality (e.g. Dual Consciousness states of mind), which in turn can foster manipulation, misinformation, disinformation, radicalization, and weaponization." - Marisa Zalabak, Co-Chair AI Ethics Education, IEEE
📜Read a write-up about this recent Responsible AI mixer here.
📺And on the theme of Responsible AI, above you can watch the video from our recent livestream with Arvind Narayanan, author of AI Snake Oil.
🗳️Are you involved in information integrity in elections? All Tech Is Human is working with the United Nations Development Programme (UNDP)
All Tech Is Human is currently assisting with an invitation-only Expert Dialogue (Feb 25 in Madrid) for in-depth discussions on collaborative mechanisms and best practices in strengthening information integrity. David Ryan Polgar and Sandra Khalil from ATIH will be in Madrid for this special gathering.
We are currently looking for individuals from major platforms who are interested in being involved in this important initiative. The focus will be on improving the effectiveness and relevance of engagement with online platforms on issues related to elections, along with strengthening partnerships and collaborative relationships among electoral stakeholders.
Meet Nicole Cuneo, one of our Princeton University Fellows
All Tech Is Human currently has three Princeton University Fellows, from the university’s GradFUTURES Social Impact program. You may have seen Nicole moderating one of our recent livestreams (Being a Changemaker in Responsible Tech). Read about how Nicole’s research has influenced how she thinks about the relationship between technology and society, her advice to aspiring professionals in Responsible Tech, and more.
👀 In Case You Missed It…
All Tech Is Human moves at the speed of tech…here are some recent happenings and projects for you to get involved with.
We are organizing a curated gathering for 50 leaders in online safety on Feb 11th (Safer Internet Day) in NYC. Apply here.
🥐Will you be at the Paris AI Action Summit? Our friends at Humane Intelligence have created an AI & Society House alongside the summit which will feature six panel conversations and expos. Learn more here.
Speaking of Paris, there is an ATIHx gathering happening tomorrow in collaboration with Women in Safety and Ethics (WISE). ATIHx are independently organized gatherings for our community, leveraging our community tools.
And speaking of ATIHx, Toronto will be holding their first ATIHx gathering on Feb 19.
We’re convening a curated gathering for 75 leaders to focus on strengthening multistakeholder collaboration around Responsible AI. Happening on May 21st at the Finnish Consulate + Residence in New York. Apply here.
We are working hard to put together a London gathering for 250 people in the late spring/early summer. Interested? Let us know here.
Nakshathra Suresh, an affiliate with ATIH, will be speaking (virtually) this Thursday at Beyond Bureaucracy: Applied AI in the Criminal Justice System.
🔭Are you exploring new roles in Responsible Tech? Outside of our popular Responsible Tech Job Board and the hundreds of roles being shared through our Slack (sign in | apply), we recently started a separate careers newsletter you can join.
💡All Tech Is Human’s whole-of-ecosystem approach involves three major areas of focus: multistakeholder community-building, educational resources, and diversifying the Responsible Tech pipeline.
💙Together, we tackle the world’s thorniest tech & society issues
⭐ Our projects & links | Year in Review 2024 | Our network | Email us
🦜 Looking to chat with others in Responsible Tech after reading our newsletter? Join the conversations happening on our Slack (sign in | apply).
💪 We’ve built the world’s largest multistakeholder, multidisciplinary network in Responsible Tech and now we’re putting it to work. This powerful network allows us to tackle the thorniest tech & society issues through collective understanding, involvement, and action. Are you part of a foundation that wants to support our mission? Reach out directly to David Ryan Polgar.