Six Categories of Concern with AI Companions
Plus our upcoming livestream on Agentic AI, insight into our Responsible AI course, and more!
Hello, and welcome back to All Tech Is Human’s newsletter. Are you ready to dig in?
In today’s edition, you’ll read our Six Categories of Concern with AI Companions, which stem from analyzing feedback from over 150 individuals. This type of project showcases our unique ability to surface important values, tensions, and tradeoffs from our multistakeholder, multidisciplinary network.
We’re thrilled that our highly participatory model for tackling thorny tech & society issues, while moving at the speed of tech, was just profiled in NYU Alliance for Public Interest Technology’s Spotlight Series. Our organization acts as a network-of-networks that leverages collective intelligence, involvement, and action as we co-create a tech future aligned with the public interest.
In today’s newsletter, you’ll also read about our livestream on August 28, an upcoming Trust & Safety gathering in NYC on October 15, and our Responsible Tech Summit: Centering Humanity in Our Tech Future, which occurs on October 27 in NYC.
We’re continuing to work on our next Responsible Tech Guide and our upcoming Responsible AI course—both set for release in October. We’ve also updated our Responsible AI, Trust & Safety, and Public Interest Technology pages to reflect our recent reports and other activities. Oh, and if you are looking for an All Tech Is Human t-shirt or mug, we now have ATIH merch; you can find a shortcut to all of our initiatives and gatherings here.
Now, onto the newsletter! 👇
We’re thrilled today to release new insights from our ongoing exploration into AI companions.
After examining over 150 responses about the most important issues related to AI companions, we identified six key themes:
1️⃣Emotional and Psychological Impact
2️⃣Human Relationships and Social Skills
3️⃣Privacy and Data Security
4️⃣Safety and User Vulnerability
5️⃣Credibility, Trust, and Transparency
6️⃣Ethical and Business Model Conflicts
“Without proper guardrails, AI companions - or agents - can pose issues around privacy and data security, transparency and explainability, ethical design and responsible AI practices, and even managing user expectations (across geographies, cultures, digital literacy levels, etc). As AI agents develop, some of the broader concerns around psychological impacts, ethical boundaries on the human-AI relationship, and even emotional dependency also come into play. From a regulatory and industry POV, this also presents edge cases around liability and shared responsibility, which is why it’s critical for the public and private sector to work together to advance responsible and ethical AI norms.” -Shahla Naimi, Senior Policy Director, Office of Ethical and Humane Use, Salesforce
RELATED: Shahla Naimi was a speaker at our May gathering on Strengthening Multistakeholder Collaboration in Responsible AI in collaboration with the Finnish Consulate. Others included Miranda Bogen (Director, AI Governance Lab, Center for Democracy & Technology), Lucia Velasco (Head of AI Policy, UN Office for Digital & Emerging Technologies), Serena Oduro (Policy Manager, Data & Society), and more.
We’re discussing the challenges facing Agentic AI and hope you can join us!
2025 is slated to be the year of AI agents. These systems, capable of independent action and decision-making, represent a paradigm shift in artificial intelligence. While offering immense potential, they also introduce significant challenges around trust, safety, privacy, and user-agent dynamics. For this livestream, you will hear from expert Leah Ferentinos, Strategic Advisor with All Tech Is Human, in conversation with ATIH’s Associate Director Sandra Khalil.
We will delve into questions of autonomy, accountability, and the potential for unintended consequences. How do we ensure these systems align with human values and societal norms? How do we safeguard user privacy? How do we address concerns around user safety, personification of AI, and other potential social ramifications? How do we mitigate the risks of misuse and malicious actors?
Five Questions for Renée Cummings
You may know Renée Cummings as an award-winning AI Ethicist, global speaker, inspiring professor, and previous fellow with All Tech Is Human. Renée is helping shape and instruct our forthcoming Responsible AI Course.
We asked her a few questions about the difficulty of keeping up with the speed of innovation, key turning points that have shaped the Responsible AI movement, competing priorities in AI Governance, applications of RAI principles that have made a tangible difference, and the skills she hopes participants in our course will gain. Read below!
“Responsible AI is also as much about education as it is about empowerment—through literacy, adaptability, and the ability to augment or reinvent skills. The more people understand the opportunities offered through Responsible AI, the more empowered they become to shape technologies that serve humanity with fairness, accountability, compassion, and vision.” -Renée Cummings, Award-Winning AI Ethicist and instructor for All Tech Is Human's forthcoming Responsible AI Course
Thank you to everyone who packed the house in NYC Monday night for our casual Responsible Tech Mixer + live podcast taping of Ethical Machines! Host and author Reid Blackman sat down with All Tech Is Human’s David Ryan Polgar for a freewheeling conversation about his fear that AI could lead us to a sad future, the inspiration behind All Tech Is Human, and, most importantly, how we can come together to ensure our future embraces joy, creativity, and human connection.
Be on the lookout for the podcast episode, which is scheduled to arrive in September.
“All Tech Is Human’s emphasis on human agency and democratic participation offers an alternative to both uncritical tech adoption and reactionary rejection. By insisting that “if you’re impacted by technology, you should have a seat at the proverbial table,” the organization provides a framework for more inclusive technological governance that recognizes technology’s social nature rather than treating it as an external force beyond human control.” -Mythili Sampathkumar, Challenging Silicon Valley’s Tech Determinism: How ‘All Tech Is Human’ Rewrites the Rules
In Case You Missed It…
We held a livestream last week exploring ways to strengthen AI assurance, bringing on participants from our recent London workshop at techUK with Trilligent. You can download a summary report from the workshop here.
As part of our collaboration with the United Nations Development Programme (UNDP)’s Action Coalition on Information Integrity in Elections, we authored actionable recommendations for tech platforms, election management bodies, government agencies, and civil society organizations.
All Tech Is Human and Cornell Tech's Security, Trust, and Safety Initiative (SETS) will host a Trust and Safety Careers event on Oct 1st at Cornell Tech (NYC). This gathering will bring together a mixture of established Trust and Safety professionals and those looking to grow their career in the field, and will feature a panel and networking. Express your interest here.
Our annual marquee gathering, the Responsible Tech Summit, will happen on October 27th in NYC for 265 individuals across civil society, government, industry, and academia. With a focus on centering humanity in our tech future, key topics will include reasserting agency, AI companions and chatbots, AI & copyright, and the intersection of Trust & Safety and Responsible AI. We are currently lining up speakers, sponsors, and applications for participation; learn more here.
We’re continuing to map the field of Responsible Tech. Throughout the coming weeks, we’ll be releasing insights from our Siegel Research Fellow, Deb Donig. If you work in Responsible Tech, we’d love for you to help by taking a 25-minute survey.
Should we upload human consciousness to synthetic bodies in the future?! Founder David Ryan Polgar was quoted in Mashable, giving his thoughts.
💙Together, we tackle the world’s thorniest tech & society issues
⭐ Our projects & links | Our network | Email us | Donate to ATIH | Our mission
🦜 Looking to chat with others in Responsible Tech after reading our newsletter? Join the conversations happening on our Slack (sign in | apply).
💪 Are you part of a foundation that wants to support our mission? Reach out directly to David Ryan Polgar and help strengthen the Responsible Tech ecosystem.