The People's AI Action Plan, AI Assurance, and Information Integrity
Plus an upcoming livestream, a Trust & Safety gathering in NYC, and our plans for the Fall
👋Well, hello there. Welcome back to All Tech Is Human’s newsletter. While we hope you are on the beach reading a book from our Responsible Tech Summer Reading List, our heavily caffeinated team has been moving at the speed of tech with a flurry of reports, livestreams, and setting up numerous initiatives for the Fall. We also recently crossed 13k members across 110 countries on our Slack, and it has been inspiring to see people meet up in cities across the globe using it.
Last week, we spoke with the Co-Executive Directors of the AI Now Institute about the People’s AI Action Plan, and we hosted a livestream sharing takeaways from TrustCon. It was painfully clear from both livestreams that there needs to be more focus on broad coalition building, which is why we have a Responsible Tech Mixer on Aug 18, a gathering for Trust & Safety professionals on Oct 1st (in collaboration with Cornell Tech), and our Responsible Tech Summit on Oct 27th. We need collective understanding and action.
For those of you who will be in NYC for UNGA (United Nations General Assembly), we are curating a list of events. Add yours here.
👉In today’s newsletter, you’ll read about our new AI Assurance Workshop Ecosystem summary report stemming from London, a mini-guide on AI companions and chatbots, and guidance from our recent project with the UNDP around strengthening information integrity in elections. And we hope you can join us virtually on Aug 14 for a conversation about AI Assurance. We also hope you check out our new explainer video, detailing how our org serves as the backbone and mobilizer for the Responsible Tech movement.
Now, onto the newsletter! 👇
💪Strengthening the AI Assurance Ecosystem
All Tech Is Human, in collaboration with techUK and Trilligent, recently brought together influential voices from across the Responsible AI and AI assurance ecosystem for a strategic conversation focused on sustained engagement and impact, as well as navigating regulatory frameworks for trustworthy and robust AI systems. This was held at techUK in London, with participants from across Europe.
Participants worked together to exchange practical strategies and next steps for effectively communicating assurance priorities across different audiences in the context of the regulatory frameworks being developed across Europe and globally — from executive leadership and investors to journalists, researchers, and policymakers.

The session was aimed at equipping participants with concrete steps for amplifying assurance perspectives, creating compelling narratives, and building the persistence needed to advance Responsible AI practices despite shifting attention cycles and political headwinds.
👋Our organization is currently planning future Responsible AI workshops in SF, NYC, and London. If you are interested in being involved, let us know here.
📜The day before our AI Assurance Workshop, we brought together 250 people for two panel conversations and networking at The Royal Society on the topic of advancing online safety. Read an overview and watch the panel recordings here.
🗣️Let’s continue the momentum…join our livestream on Aug 14
Building on the roundtables during our recent workshop in London, we will be discussing:
How can we transform AI assurance from a compliance exercise into a strategic advantage that resonates with decision-makers across sectors?
How are we navigating the dramatically different regulatory landscapes in the U.S., UK, and EU?
Which strategies are gaining traction, and what's the business case for adoption?
What approaches (partnerships, narrative techniques) can create sustained momentum for AI assurance that withstands shifting attention cycles and political headwinds?
📜Read our new mini-guide on AI companions and chatbots!
The issues related to AI companions and chatbots are rapidly evolving, and touch upon design, linguistics, psychology, ethics, law, and more. That’s why it is essential to have a robust multistakeholder, multidisciplinary Responsible Tech ecosystem. Our mini-guide covers:
An overview of AI companions
Key pivot points
Major public concerns
Responses to concerns
What you should know
Resources to learn more
Groups to follow
Ways to stay involved in this evolving conversation
This resource was led by our Princeton University GradFUTURES Social Impact Fellow, Rose Guingrich, in collaboration with ATIH’s Sandra Khalil. It also builds on an earlier ATIH livestream that Rose moderated featuring Kim Malfacini (OpenAI), Sam Hiner (Young People’s Alliance), and Henry Shevlin (Leverhulme Centre for the Future of Intelligence).
👋Do you have related resources we should know about? Would you like to be involved in future projects around AI companions and chatbots? Fill out our interest form.
💡AI companions and chatbots will be a major topic of discussion at our upcoming Responsible Tech Summit on October 27 in NYC.
🤝Strengthening Information Integrity in the Age of AI: Our Partnership with UNDP’s Action Coalition
All Tech Is Human is proud to share reflections and recommendations emerging from our ongoing partnership with the United Nations Development Programme (UNDP)’s Action Coalition on Information Integrity in Elections. This collaboration comes at a pivotal moment for the future of democracy and governance in tech-enabled contexts.
One of the defining challenges of this moment, which was mentioned frequently during this collaboration, is the proliferation of generative AI, including the rapid rise of deepfakes, manipulated media, and AI-amplified disinformation. These technologies are increasingly being weaponized to undermine trust, confuse voters, and disrupt the flow of credible information during critical democratic processes.
In February, All Tech Is Human’s Sandra Khalil and David Ryan Polgar, along with advisor Leah Ferentinos, participated in the 2025 Action Coalition Strategic Dialogue that was held in Madrid. This project has also been heavily informed by our Senior Fellow for Information Integrity, Alexis Crews.
🌏To continue this work, Alexis Crews has been leading our Global Election Guide Series. Read the new election guide on the 2025 Haitian election.
All Tech Is Human’s Rebekah Tweed spoke with AI Now Institute’s Co-Executive Directors Amba Kak and Sarah Myers West on the launch of the People's AI Action Plan, which is intended to deliver on public well-being, shared prosperity, a sustainable future, and security for all.
All Tech Is Human’s Sandra Khalil spoke with Alisar Mustafa (Head of AI Policy at Duco) and Theodora Skeadas (Community Policy Manager at DoorDash, advisor with ATIH) about takeaways from this year’s TrustCon, along with the future direction of Trust & Safety.
📆Coming soon from All Tech Is Human…
Our Responsible Tech Mixer in NYC on August 18th will feature a live podcast taping of Reid Blackman’s Ethical Machines. Reid will be in conversation with David Ryan Polgar, Founder & President of All Tech Is Human.
The next edition of the Responsible Tech Guide will also be available in print! We are currently conducting interviews with individuals whose careers sit at the intersection of fields (Responsible AI, Trust & Safety), disciplines, and backgrounds. Nominate someone here.
All Tech Is Human and Cornell Tech's Security, Trust, and Safety Initiative (SETS) will host a Trust and Safety Careers event on Oct 1st at Cornell Tech (NYC). This gathering will bring together established Trust and Safety professionals and those looking to grow their careers in the field, and will feature a panel and networking. Express your interest here.
Our annual marquee gathering, the Responsible Tech Summit, will happen on October 27th in NYC for 265 individuals across civil society, government, industry, and academia. With a focus on centering humanity in our tech future, key topics will include reasserting agency, AI companions and chatbots, AI & copyright, and the intersection of Trust & Safety and Responsible AI. We are currently lining up speakers, sponsors, and applications for participation.
Our Responsible AI course, with instruction led by Professor Renée Cummings, an award-winning artificial intelligence innovator, noted AI ethicist, and the first data activist-in-residence at the University of Virginia’s (UVA) School of Data Science, will arrive in October. Express your interest here.
We’re continuing to map the field of Responsible Tech. Over the coming weeks, we’ll be releasing insights from our Siegel Research Fellow, Deb Donig. If you work in Responsible Tech, we’d love for you to take our 25-minute survey.
💙Together, we tackle the world’s thorniest tech & society issues
⭐ Our projects & links | Our network | Email us | Donate to ATIH | Our mission
🦜 Looking to chat with others in Responsible Tech after reading our newsletter? Join the conversations happening on our Slack (sign in | apply).
💪 Are you part of a foundation that wants to support our mission? Reach out directly to David Ryan Polgar and help strengthen the Responsible Tech ecosystem.