NEW: OpenAI Leaders Exposed For Bias, Calling Senior Trump Officials “Total Sycophants”


Comments from senior executives at OpenAI are drawing criticism after a top company leader publicly referred to allies of President Donald Trump as “total sycophants,” adding fuel to an ongoing debate about political bias inside major artificial intelligence firms.

Last year, OpenAI Head of Global Business James Dyett made the remark in response to a post from Trump's artificial intelligence and cryptocurrency adviser David Sacks. Sacks had taken aim at critics of the administration's technology and economic policies, arguing that some commentators were rooting for the president to fail.

Dyett responded directly to the post by describing Sacks and other Trump supporters as “total sycophants.”

Beyond Dyett's remark, several prominent OpenAI officials have previously held positions in Democratic administrations or political campaigns.

OpenAI’s vice president of global affairs, Chris Lehane, served in the Clinton White House and later worked on the presidential campaign of former Vice President Al Gore. During the Clinton years, Lehane was often described as an aggressive political strategist, once labeled a “slash-and-burn White House operative” and a “pugnacious political operative during the scandal years.”

James Dyett, Head of Global Business at OpenAI

OpenAI general counsel Scott Schools previously worked at the Department of Justice under several administrations, including the Obama administration, and returned in October 2016 to serve as the top deputy to former Deputy Attorney General Rod Rosenstein. Dyett himself also worked in the National Economic Council under Barack Obama.

Meanwhile, OpenAI’s chief economist, Aaron Chatterji, was the Democratic nominee for North Carolina state treasurer in 2020. His campaign was endorsed by Obama, Joe Biden, and Kamala Harris. Chatterji also previously served in both the Biden and Obama administrations as an economic adviser.

The debate over bias in artificial intelligence has also been examined in several academic studies. A 2023 research paper by scholars from the University of Washington, Carnegie Mellon University, and Xi’an Jiaotong University evaluated fourteen major AI systems and concluded that ChatGPT and GPT-4 were “the most left-wing libertarian” models in the group.

Another 2023 study by researchers at the University of East Anglia found what it described as “robust evidence” that ChatGPT displayed “a significant and systematic political bias toward the Democrats in the U.S.”

OpenAI has previously said it does not intentionally program political preferences into its models and that responses can reflect patterns found in training data drawn from across the internet.

OpenAI CEO Sam Altman attends the artificial intelligence (AI) Revolution Forum in Taipei on September 25, 2023.

The controversy comes as OpenAI becomes increasingly involved in U.S. national security infrastructure. The company recently signed a deal with the Pentagon to deploy AI systems on classified government networks after a dispute between the Defense Department and rival AI firm Anthropic over restrictions on military uses of artificial intelligence.

Anthropic had refused to allow its models to be used for domestic surveillance or fully autonomous weapons, prompting the Trump administration to cut the company out of federal contracts and label it a potential supply-chain risk. OpenAI then moved quickly to strike its own deal with the Pentagon to provide AI systems for government networks.

The agreement sparked backlash from some researchers and OpenAI employees, who warned the technology could be used for surveillance or military targeting. OpenAI CEO Sam Altman later acknowledged the rollout appeared “opportunistic and sloppy.”

The company later revised parts of the agreement with the Defense Department, reportedly adding restrictions against surveillance of U.S. citizens and limiting certain intelligence-agency uses.

Altman addressed the issue during a company-wide meeting Tuesday, telling staff that the U.S. government ultimately decides how the military uses the company’s artificial intelligence tools and that OpenAI employees do not have a say in those decisions. According to a partial transcript of the meeting reviewed by CNBC, Altman said the Pentagon made it clear the company does not control operational decisions involving the Department of Defense’s use of its technology.

“So maybe you think the Iran strike was good and the Venezuela invasion was bad,” Altman said Tuesday. “You don’t get to weigh in on that.”

OpenAI leadership has also drawn scrutiny over Altman's personal views on psychedelic drugs. Altman has spoken publicly about his experiences with psychedelic substances, describing some as "transformative" or "life-changing." Reports have also noted his financial backing of a startup pursuing FDA approval for therapies involving MDMA derivatives.

Meanwhile, Anthropic CEO Dario Amodei has returned to negotiations with the U.S. Department of Defense after talks broke down Friday over how the military would be allowed to use the company’s AI tools. The discussions collapsed after President Trump directed federal agencies to stop using Anthropic’s technology.

A new agreement would allow the U.S. military to continue using Anthropic’s AI systems, which reports say have already been used in Washington’s war with Iran.