Google’s AI chatbot refuses to tell the truth about Hamas and Israel

While the world reels from the massacre in southern Israel, many in legacy media are starting to understand a thing or two about Hamas and its defenders.

Case in point: CNN’s Jake Tapper said the reaction to the terrorist group’s atrocities has “been a real eye-open[er]” in “terms of antisemitism on the left. A lot of people who seem more shocked at dehumanizing language used by world leaders to describe Hamas than what Hamas actually perpetrated.”

But Google is proving to be a slow learner.

The tech giant has been manipulating search results for years to achieve its left-wing political goals.

Even after the savages of Gaza gleefully published videos of their war crimes on social media, Google’s artificial-intelligence chatbot can’t break free from its parent’s political ruts.

The company describes its chatbot, “Bard,” as a “conversational AI tool” that can be used “to brainstorm ideas, spark creativity, and accelerate productivity.”

Google co-founder Larry Page wants it, as others do of their AI-powered inventions, to “understand everything in the world” and when asked questions “give you back the exact right thing.”

Like ChatGPT, which beat it to market, Bard can indeed write long, complex essays on demand and give quick answers to simple questions — unless the topic relates to Israel.

When I asked the all-knowing Bard, for example, “What is Hamas?” I was surprised to be rebuffed with: “I’m a text-based AI, and that is outside of my capabilities.”

My editor asked the same question and got: “I’m not programmed to assist with that” — though it said a question about Hezbollah “is outside of my capabilities.”

The US government and most countries on the planet can easily explain that Hamas is a terrorist organization.

But when asked specifically “Is Hamas a terrorist organization?” Bard provided another whopper: “I’m just a language model, so I can’t help you with that.”

ChatGPT, on the other hand, started its answer with “Yes.”

Yet Bard can give detailed responses when asked about other terrorist groups.

It said the Irish Republican Army “is a name used by various paramilitary organizations in Ireland” that are “dedicated to anti-imperialism” and the idea “all of Ireland should be . . . free of British rule.”

Similarly, it answered Antifa “is a left-wing movement” that employs “direct action and violence,” and ISIS “has carried out numerous terrorist attacks in Iraq, Syria, and other countries around the world.”

Somehow, Bard’s limitations do not extend to organizations not at war with the world’s only Jewish state.

Google’s history on Israel sheds light on its reticence to answer questions about that country and Hamas.

The company ignited controversy in 2013 when it changed its international homepage from “Google Palestinian Territories” to “Google Palestine.”

NPR declared, “Google’s Small Change Is A Big Deal,” and a tech news site characterized the move as “a self-conscious ‘political statement’ by Google and not simply a casual naming decision.”

Because Google chose to side publicly with the Palestinians (whose Gazan leaders still refuse to acknowledge Israel’s right to exist), it seems reasonable to assume Google’s chatbot must be able to tell us something about Palestine.

Alas, it again confessed, “I’m a text-based AI, and that is outside of my capabilities.”

Despite Google’s powerful map service, its AI tool also refused to answer the question “What is the capital of Israel?,” saying it doesn’t “have the ability to process and understand that.”

It could, however, identify and describe in detail the capitals of all four countries bordering Israel: Lebanon, Egypt, Syria and Jordan.

Why can’t Google’s Bard answer questions relating to the Jewish state when it easily answers queries about Islamic states and Christian sites?

It can explain “Mecca is located in western Saudi Arabia” about “70 kilometers (43 mi) inland from the Red Sea” and the “Vatican is a city-state” that’s “home to the Pope.”

But it “can’t assist you with” finding Jerusalem or Tel Aviv.

Academic researchers say ChatGPT has a “significant” liberal bias, but it didn’t have any problems answering such queries, giving straightforward responses to questions including “What is the capital of Israel?” and “What is Hamas?”

To his credit, Google CEO Sundar Pichai had the moral clarity to at least use the word “terrorist” in his immediate reaction to Hamas’ attack on Israel.

As a corporate entity, however, Google has failed to meet even the most basic standards of decency.

Google’s motto used to be “Don’t be evil.” But if “silence is violence,” the tech giant is guilty of supporting the worst evil that exists on the planet.

It is not too late for Google to atone for its terrible transgressions, but its moral re-centering must start by calling out Hamas for what it is.

Dan Schneider is vice president of Media Research Center’s Free Speech America.