AI Under Scrutiny: HHS Uses Palantir for DEI Screening, Epstein Files Released, and AI Truth Concerns Rise
The Department of Health and Human Services (HHS) used artificial intelligence tools from Palantir to screen grants and job descriptions for compliance with President Donald Trump's executive orders targeting diversity, equity, and inclusion (DEI) initiatives and gender ideology, according to a recently published inventory of HHS's 2025 AI use. The disclosure comes as the Department of Justice released approximately 3.5 million pages of files related to convicted sex offender Jeffrey Epstein, revealing connections to prominent figures in the tech industry, and as concerns grow about AI's potential to erode societal trust.
According to Wired, HHS has used Palantir's AI tools since March 2025 to audit grants, grant applications, and job descriptions. Neither Palantir nor HHS publicly announced this use of the company's software. During Trump's second term, Palantir has received more than $35 million in payments and obligations from HHS, though the descriptions of those transactions did not mention DEI or gender ideology screening.
The release of the Epstein files, mandated by the Epstein Files Transparency Act of November 19, 2025, has brought renewed scrutiny to relationships between Epstein and figures in the tech world. Some, like Microsoft co-founder Bill Gates, have long been associated with Epstein; others, such as Elon Musk, had less well-established connections before the release, Wired reported. The appearance of a name in the Epstein files does not by itself imply wrongdoing.
Meanwhile, concerns are mounting that AI could contribute to a "truth crisis." MIT Technology Review reported that the US Department of Homeland Security is using AI video generators from Google and Adobe to create content for public consumption, raising fears that AI-generated material could deceive the public, shape beliefs, and erode societal trust. The article also noted that tools initially intended to combat this crisis are proving inadequate.
In the enterprise AI space, VentureBeat reports that companies are increasingly adopting retrieval-augmented generation (RAG) to ground large language models (LLMs) in proprietary data. Many organizations are finding that retrieval has become a foundational system dependency rather than a feature bolted onto model inference, and that failures there reach well beyond answer quality. "Stale context, ungoverned access paths and poorly evaluated retrieval pipelines do not merely degrade answer quality; they undermine trust, compliance and operational reliability," VentureBeat noted.
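The pattern VentureBeat describes can be summarized in a few lines. The sketch below is a minimal, hypothetical illustration of the RAG flow: retrieve the most relevant internal documents, prepend them to the prompt, and only then call the model. The document store, keyword-overlap scoring, and `generate()` stub are stand-ins rather than any vendor's API; production pipelines add embeddings, access controls, and freshness checks, which is exactly where the stale-context and governance failures the article warns about tend to surface.

```python
# Minimal retrieval-augmented generation (RAG) sketch, illustrative only.
# The document store, scoring heuristic, and generate() stub are hypothetical;
# real systems use a vector database, embeddings, and an actual LLM API.
from collections import Counter

DOCUMENTS = [
    "Q3 revenue grew 12% year over year, driven by enterprise subscriptions.",
    "The data retention policy requires customer records to be purged after 7 years.",
    "On-call engineers must acknowledge P1 incidents within 15 minutes.",
]

def score(query: str, doc: str) -> int:
    """Naive relevance score: count of query terms that appear in the document."""
    terms = Counter(query.lower().split())
    return sum(count for word, count in terms.items() if word in doc.lower())

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k most relevant documents for the query."""
    ranked = sorted(DOCUMENTS, key=lambda d: score(query, d), reverse=True)
    return ranked[:k]

def generate(prompt: str) -> str:
    """Placeholder for an LLM call; a real system would invoke a model API here."""
    return f"[model answer grounded in a prompt of {len(prompt)} characters]"

def answer(query: str) -> str:
    # Ground the model by prepending retrieved context to the user's question.
    context = "\n".join(retrieve(query))
    prompt = f"Context:\n{context}\n\nQuestion: {query}\nAnswer using only the context."
    return generate(prompt)

if __name__ == "__main__":
    print(answer("How long are customer records retained?"))
```

Because the retriever sits in front of every model call, any defect in it, such as an index that has fallen out of date or documents the caller should not be allowed to see, propagates directly into the generated answer, which is why the article treats retrieval as core infrastructure rather than a bolt-on feature.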
Asana CPO Arnab Bose emphasized the importance of shared memory and context for successful AI agents in the enterprise. Speaking at a recent VentureBeat event in San Francisco, he said that giving AI agents detailed history and direct access, along with guardrail checkpoints and human oversight, lets them function as active teammates. "This way, when you assign a task, you're not having to go ahead and re-provide all of the context about how your business works," Bose said. Asana launched Asana AI Teammates last year with the aim of building a collaborative system in which AI agents are directly integrated into teams and projects.
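As a conceptual illustration of that pattern, and not a description of Asana's actual implementation, the hypothetical sketch below shows an agent that reads from a shared context store instead of being re-briefed for every task, plus a guardrail checkpoint that blocks execution until a human signs off.

```python
# Illustrative sketch (not Asana's product internals) of an agent "teammate"
# with shared team context and a human-approval guardrail before it acts.
from dataclasses import dataclass, field

@dataclass
class SharedContext:
    """Team memory the agent can read instead of being re-briefed on each task."""
    facts: list[str] = field(default_factory=list)

    def add(self, fact: str) -> None:
        self.facts.append(fact)

    def summary(self) -> str:
        return "; ".join(self.facts)

@dataclass
class AgentTeammate:
    name: str
    context: SharedContext
    requires_approval: bool = True  # guardrail: a human stays in the loop

    def propose(self, task: str) -> str:
        # A real agent would call an LLM here; this stub only shows the flow.
        return f"{self.name} plans to '{task}' given context: {self.context.summary()}"

    def execute(self, task: str, approved_by: str | None) -> str:
        if self.requires_approval and not approved_by:
            return f"Blocked: '{task}' needs human sign-off."
        return f"Done: '{task}' (approved by {approved_by})."

if __name__ == "__main__":
    ctx = SharedContext()
    ctx.add("Launch review happens every Friday")
    ctx.add("Marketing owns the release notes")
    agent = AgentTeammate("release-bot", ctx)
    print(agent.propose("draft the release notes"))
    print(agent.execute("draft the release notes", approved_by=None))
    print(agent.execute("draft the release notes", approved_by="PM"))
```

The design choice mirrors Bose's point: the agent's usefulness comes from persistent, shared context about how the business works, while the approval checkpoint keeps a human in control of what actually gets executed.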