In connection with the U.S. election on November 5, ChatGPT blocked approximately 250,000 requests to generate deepfake images, Engadget reports. OpenAI, the company behind ChatGPT, disclosed that the requests sought fake images of political figures, including President Joe Biden, Donald Trump, Vice President Kamala Harris, Senator J.D. Vance, and Governor Tim Walz.
To combat misinformation, OpenAI had preemptively strengthened its safety measures, configuring ChatGPT to refuse requests to generate images of real people, particularly political figures. The initiative aimed to prevent false information from influencing the election.
OpenAI also confirmed that ChatGPT maintained neutrality in its responses, avoiding political bias, declining to endorse candidates, and refusing to predict election outcomes.