Recognizing the rise of artificial intelligence, Rappler is taking an open yet prudent approach to the adoption of AI tools, while staying true to our mission of delivering cutting-edge journalism that spurs communities to action.
Our exploration of AI tools is grounded in truth, independence, and integrity – the very same principles that are at the core of what we do. There is no room for systems that may undermine our credibility or spread misinformation and disinformation.
We will fully disclose to our readers and viewers when content has been produced by an AI tool. AI-generated content will be clearly labeled as such.
Cognizant of AI’s inherent benefits and risks, we seek to use and develop systems and tools that promote social justice, fairness, and non-discrimination. Paramount in this process are sourcing data ethically, protecting data privacy, upholding intellectual property rights, and guarding against biases.
All AI tools, for any purpose, must first be tested by a select group of employees and approved by senior management before they are authorized for company-wide usage. An inter-unit team serves as the initial gatekeeper for proposing, discussing, vetting, and experimenting with AI tools. Suggested tools will be prioritized based on quantified value-add as well as ease of use and implementation.
Even as we explore ways to tap AI, we put a premium on the supremacy of human critical thinking and judgment. AI tools will neither replace employees nor serve as decision makers. Rather, these tools will primarily be used to enhance productivity and efficiency in the newsroom and in the field, and to create products that better serve the needs of our audience and partners. Automating repetitive tasks, for instance, would enable us to focus on higher-impact work.
AI output should never be a substitute for newsgathering, which includes research, interviews, and corroboration. It will not eliminate writing, which requires the ability to distinguish fact from fiction, a nuanced understanding of issues, and an awareness of cultural context. We will continue to hone our newsgathering and writing skills, maintaining the highest standards, even as AI is introduced into the process to boost efficiency.
All text, video, and audio produced using AI, whether in part or in whole, require thorough human review and approval. Content generated by AI must undergo editorial scrutiny before being published, posted, or publicly released. We must be able to make sense of AI output and explain the end result. We are responsible for ensuring the quality of our content, particularly its accuracy, fairness, completeness, and adherence to established Rappler style.
We may use generative AI for tasks such as summarizing, transcribing, data sorting, grammar and style checking, and translating, always with human oversight.
AI-generated images are not intended to replace the work of Rappler’s photojournalists and artists. Given the ethical and legal issues surrounding the use of images from AI, including potential copyright infringement, we will judiciously deploy AI tools in our visual executions, vetted by senior editors and guided by our editorial policies.
We pledge transparency in our adoption of AI tools, keeping our audience informed about the way we use innovative methods. Our guidelines are a work in progress as technology rapidly evolves and as the media industry adapts to technological advancements. Changes to our guidelines will be reflected on this page.
To ensure compliance with these guidelines, there will be close monitoring and regular review of AI usage in the company. Any violation will be dealt with accordingly.
We also welcome feedback and insights from our audience. You may email us at feedback@rappler.com. – Rappler.com