Purpose
This policy guides how Fix The Status Quo (FTSQ) and Proca use artificial intelligence (AI) across our internal operations, product development, and collaboration with clients and vendors.
It defines our principles, safeguards, and boundaries for using AI responsibly, while encouraging critical and strategic thinking rather than prescribing a fixed list of allowed uses.
Audience
This policy applies to FTSQ team members and collaborators, existing and prospective clients, and users of Proca tools.
Review
AI technologies evolve rapidly. This policy is a living document. We commit to regularly reviewing its content as technology and laws change, and learning from real-world use and feedback.
1. Our Philosophy
AI is already widely used across our sector. We engage with it critically and selectively, recognising both its potential and its risks.
Our approach is rooted in our core values:
- Privacy and data protection
- Democratic integrity (regarding institutions, citizens and how they relate to each other)
- Care for the environment & climate
- Respect for human rights, human labour and creativity
We believe that AI tools are assistive technologies and systems, not decision-makers. We do not delegate decisions with real-world impact, particularly those affecting people, public discourse, or democratic participation, to AI systems. We use AI technology to support various aspects of our work, always under the principle of human oversight and validation.
We are mindful that AI:
- Is often trained on data used without permission from creators and authors
- Can reduce critical thinking if used indiscriminately
- Can introduce bias or spread misinformation
- Might have negative impacts on workers, politics and society unless regulated
- Can weaken authenticity in communication, and with it trust between people and institutions
We commit to using AI in ways that support or streamline human work, not replace human creativity, judgement or responsibility. We use AI only where its benefits clearly outweigh risks and negative impacts.
We avoid financially supporting AI providers whose practices conflict with our values, including involvement in mass surveillance, military applications and warfare, or violations of international law and human rights. We will periodically review providers against these criteria.
2. Risk-Based Approach
We assess AI use cases based on risk:
- Low risk: internal productivity (e.g. summaries, translations)
- Medium risk: content generation, coding assistance, data analysis
- High risk:
  - User-facing features of our action tools
  - Generation of any political messaging
  - Moderation systems for user submissions in the back office
  - Anything affecting public participation in campaigns and actions
Higher-risk uses require stricter safeguards, testing, and human oversight.
3. Core Principles
- Privacy & Data Protection
We do not input sensitive or privileged information into generative AI tools.
AI systems used by FTSQ:
- Have no access to personal data of clients, users, or supporters
- Must comply with applicable data protection standards (e.g. GDPR-aligned safeguards)
Where possible, we ensure that inputs are not used to train external models and that third-party tools meet appropriate data protection standards.
- Transparency
We disclose the use of AI where it affects our public-facing content (written or images), campaign outputs (like user input into a campaign, e.g. a petition comment) or the functionality of our action tools.
Where applicable, users and clients can ask us to explain in detail how AI is used, and can opt out of AI-enabled features (e.g. the Snowflake Engine – see below – or machine translations).
- Accountability
Humans at FTSQ remain responsible for all AI-assisted outputs. We review and validate all AI-generated content and code and take responsibility for its factual accuracy, bias, and impact.
If AI-generated outputs are found to be harmful or misleading, we:
- Correct or remove them
- Review the underlying process & learn from it
- Improve our safeguards for the future
We recognise that AI systems can reproduce or amplify bias. In cases where our action tools are assisted by generative AI, we run thorough tests before releasing any feature to the public, review the generated outputs critically, and avoid discriminatory, misleading or otherwise harmful content. We do not rely on AI for knowledge and facts without human verification.
- Verification and Correct Representation
We verify all AI-generated outputs before use, especially when they include factual claims, data, or references. AI outputs are not treated as reliable sources of truth. A human being is always required to review and approve AI-generated outputs before publishing.
We do not use AI-generated or AI-altered images, audio, or video in ways that could be mistaken for real events, people, or documentation. Any AI-generated content must be clearly identifiable as such, e.g. in an image or video caption or in “alt text”.
- Environmental Responsibility
We recognise the environmental and climate cost of AI systems, in particular the power and water required to run AI data centers.
We aim to minimise unnecessary use of compute-intensive tools where possible, prioritise efficient internal workflows and smaller, more efficient AI models, and consider environmental impact in tool selection.
4. Political Integrity & Democratic Legitimacy
As a provider of digital campaigning tools, we are committed to improving citizen engagement and positively impacting democratic participation and trust in the democratic process.
We do not support the use of AI (or any other tech or tools) to:
- Simulate grassroots participation (astroturfing)
- Generate fake supporter messages, voices, accounts or identities
- Automate political participation in deceptive ways
- Artificially amplify messages or demands in ways that don’t align with real people’s opinions
- Spread misinformation, disinformation, or propaganda
Our tools are designed to enable real people to take action, not to replace them.
We actively discourage and disallow the use of Proca tools in ways that undermine trust in digital campaigning or create any doubt about whether action-takers are real people.
5. Internal Use of AI
FTSQ team members may use AI tools for:
- Drafting and editing text
- Summarising (e.g. meeting notes)
- Translation
- Coding assistance
- Research support
- Initial data analysis
- Generating images or videos when appropriate
- Other tasks based on risk assessment and in line with our guiding principles
AI use that is not permitted:
- Inputting personal, sensitive or confidential data
- Using AI to fabricate facts, sources, events, or create spoofs or deepfakes
- Generating code or content without human oversight and approval
- Sharing or publishing AI-generated content or analysis without review and fact-checking
- Sharing any AI-generated content as if it were created by a human
Staff should:
- Disable chat history or training where possible when using AI to support their work
- Avoid using organisational accounts for external AI tools unless approved
- Avoid creating or publishing AI systems (e.g. custom GPTs) that include internal or sensitive data, in particular any kind of personal information
We do not mandate or prohibit the use of specific tools, but staff are expected to follow this policy and apply critical judgement. However, in line with our philosophy, FTSQ does not financially support the use of OpenAI tools (including subscriptions to ChatGPT and similar services) due to their involvement with the US military.
6. Use of AI in Proca Products
We use AI selectively within Proca tools, in particular to improve the efficiency and effectiveness of our action tools and the experience of the end user, including:
- Snowflake Engine (message mixing tool, more information available here)
- Text generation tools (to assist the supporter in creating a personal message or a comment, not as a replacement)
- AI-supported moderation of user-generated content in campaign backend (in campaigns that include user submissions of text or images)
- Machine translation
- Image generation
- Others
We design AI-supported features with a number of specific considerations and safeguards in place, in order to ensure that supporters can use our tools to express real opinions and share real experiences, while avoiding duplication and outputs that appear bot-like.
The AI moderation system is an optional feature intended to assist human review by flagging inappropriate or harmful content, similar to spam filters. Its decisions can be reviewed and challenged by a human moderator.
We test AI-enabled features for bias and discrimination, as well as for accuracy and hallucination risks. We monitor performance over time and restrict or disable features if necessary.
7. Vendor & Tool Selection
When selecting AI technology, in particular paid or subscription-based tools, we consider:
- Data privacy and retention policies and whether user content is used for training
- Environmental impact
- Transparency and openness
- Alignment with our ethical and political values, while acknowledging that at the current stage of development of AI technology, fully ethical tools do not exist
We avoid tools that train on user data without consent or that lack clear safeguards against misuse. Where feasible, we choose open-source or open-weight models.
