Last updated Nov 29, 2025

In conversation with Sheryl Sandberg, plus open-source AI gene editing explained

Fri, 03 May 2024 18:59:00 +0000
Following this episode, the 'fake Chamath' Twitter handle will see a significant increase in popularity and activity in the near future.
My guess is that Twitter handle is about to get popular again.
Explanation

The “fake Chamath” handle referred to in the episode is almost certainly @ChamathWarriors – “Chamath Palihapitiya’s burner (parody)” on X/Twitter. The episode transcript matches this: Friedberg jokes about Dave Sacks “being fake Chamath” and says, “My guess is that Twitter handle is about to get popular again.”(podscripts.co)

What we can see empirically:

  • The @ChamathWarriors account had been active and widely cited well before the May 3, 2024 episode. Its memes were quoted in media pieces and blogs in 2022–2023 (e.g., on ChatGPT and Pixelmon) and in political/tech commentary, showing it already had notable reach.(moguldom.com)
  • As of a late‑2025 snapshot, @ChamathWarriors shows ~7,000 tweets and ~16,000 followers, indicating a moderately popular, consistently active parody account, not an abandoned or tiny one.(twstalker.com)
  • After the May 2024 episode, the account continues to be referenced in third‑party content: an investing newsletter highlights its memes in October 2024 (“Perfect last‑minute costume idea”) and again in February 2025 (“Dave said go all‑in”), suggesting it remains part of the broader finance/tech meme ecosystem, with at least one meme now explicitly tied to All‑In.(becomeabetterinvestor.net)

However, the core of the prediction is about a “significant increase in popularity and activity” after the episode. To judge that, we’d need things like:

  • follower counts or engagement metrics before May 3, 2024 vs. after;
  • or reporting/commentary that clearly notes a resurgence (“this account suddenly blew up after being outed on the All‑In podcast”).

Publicly accessible sources don’t provide historical follower data for @ChamathWarriors, and no coverage explicitly documents a distinct popularity spike or revival around that time. The available evidence shows only steady, ongoing popularity from 2022 through 2025, with mentions both before and after the episode, and no clearly identifiable “re‑becoming popular” moment.

Because:

  • the account was already popular enough to be quoted in mainstream and niche outlets before the prediction; and
  • we lack quantitative or clear qualitative evidence of a step‑change in popularity or activity afterwards;

it’s not possible to definitively say whether Friedberg’s “about to get popular again” call was correct or incorrect. The account remained active and visible, but whether there was a significant renewed surge is not verifiable from available data.

So the fairest classification is ambiguous: enough time has passed, but the necessary metrics to confirm or refute a “significant increase” don’t appear to be publicly documented.

In the near future, the 'fake Chamath' Twitter account will become significantly more popular than it was prior to this episode.
It's gonna get pretty popular.
Explanation

Jason is referring to the long‑dormant parody X (Twitter) handle @Fakechamath and says, right after Sheryl Sandberg reveals Dave Goldberg ran it: “My guess is that Twitter handle is about to get popular again. It’s going to get pretty popular.”(podscripts.co)

The episode’s show notes confirm that the handle in question is indeed twitter.com/fakechamath.(podcasts.apple.com)

Third‑party X viewers such as TwStalker show @Fakechamath having roughly 3,000 followers and 60 following, with multiple independent profile snapshots in 2025 all reporting about the same follower count.(twstalker.com) However, these tools only expose the current count at the time they crawl the profile; they do not provide a time series or a pre‑May‑2024 snapshot. X’s own interface is behind a login/JS wall and likewise doesn’t expose historical follower numbers.

Because there is no accessible data on how many followers or what level of engagement @Fakechamath had before the May 3, 2024 episode, we cannot rigorously determine whether it became “significantly more popular” afterward. The current modest size (~3K followers) also doesn’t clearly prove or disprove a large relative increase from its prior baseline.

With sufficient time elapsed but no reliable before‑and‑after metrics for popularity, the outcome of this prediction cannot be determined from available evidence, so it is best classified as ambiguous.

Information about the Israel–Hamas/Gaza war (e.g., casualty numbers, authenticity of videos, and on-the-ground facts) will remain highly unclear and contested for an extended period after this May 2024 conversation, with no quick resolution to the factual disputes in the near term.
The fog of war is going to be thick for a while here, folks, and it's going to take us a while.
Explanation

As of late November 2025, more than two years after the October 2023 Hamas attack and over 18 months after the May 2024 podcast, core factual questions about the Gaza war remain highly contested, supporting Jason’s prediction that the “fog of war” would persist.

  1. Casualty numbers remain disputed and only partially verifiable. Gaza’s Health Ministry now reports over 70,000 Palestinians killed, figures the UN continues to treat as broadly credible, yet Israel publicly questions their accuracy and still does not provide its own comprehensive alternative count. (reuters.com) Peer‑reviewed analyses in The Lancet and related work suggest the ministry’s tallies may significantly undercount deaths, producing higher statistical estimates and thereby introducing another set of divergent figures. (theguardian.com) An Associated Press analysis of Health Ministry data highlighted shifting reported proportions of women and children among the dead and raised questions about data reliability under conditions of system collapse. (apnews.com)

  2. International bodies and states openly contest interpretations of the same data. A May 2024 controversy over a UN/OCHA table of identified victims was seized on by pro‑Israel commentators as proof earlier UN‑cited totals were exaggerated, forcing the UN and WHO to clarify that overall death estimates had not been revised down and that they still could not independently verify the full toll because of access and capacity limits. (theguardian.com) This illustrates that, even well into 2024 and beyond, basic casualty breakdowns and their meaning remain politically and factually disputed rather than “settled.”

  3. Authenticity of videos and on‑the‑ground facts are still heavily contested. Media‑forensics work and monitoring groups document hundreds of recurring myths, doctored or AI‑generated visuals, and miscaptioned war footage spreading on major platforms on both sides of the conflict into 2025, underscoring a long‑running information war with no clear resolution. (newsguardtech.com) Key incidents—such as responsibility for the Al‑Ahli hospital blast—have remained subjects of competing narratives and enduring uncertainty in formal media reviews long after the events themselves. (abc.net.au)

Because nearly all the domains Jason mentioned—casualty counts, civilian/combatant ratios, responsibility for specific attacks, and authenticity of visual evidence—remain murky, disputed, or only partially reconstructable well over a year after his May 2024 comment, the prediction that “the fog of war is going to be thick for a while” has clearly been borne out.

Over the coming years, advances and wider use of AI-generated content will increase the prevalence and believability of misinformation and disinformation, making it progressively more difficult for the public to determine what is true from media sources.
It's going to get worse with AI. It's going to be harder and harder to find the truth.
Explanation

Evidence since the podcast’s May 2024 release strongly supports Jason’s prediction that AI will make misinformation more prevalent and convincing, and thus make it harder for the public to discern truth from media sources.

  1. Rising use of generative AI in misinformation & disinformation
  • A July 2024 report by the Center for Countering Digital Hate documented how generative AI tools are being used to produce election-related disinformation, including fake images and text, at scale, with minimal friction for would-be propagandists.
  • The EU’s disinformation monitoring initiatives have repeatedly warned in 2024–2025 that generative AI is lowering the cost and increasing the volume of misleading political content, especially ahead of major elections.
  2. Increased believability and realism (deepfakes, synthetic media)
  • Multiple 2024–2025 news and research reports describe rapidly improving deepfakes and AI-generated audio/video being used to impersonate public figures and spread false narratives, including fabricated speeches and statements. These are often realistic enough to fool ordinary viewers without careful verification, aligning with the idea that it’s becoming “harder and harder to find the truth.”
  3. Growing difficulty for the public to distinguish real from fake
  • Surveys and research in 2024–2025 show declining trust in online information and increasing concern that AI-generated content makes it harder to know what to believe, especially on social platforms and messaging apps. This is consistent with Jason’s claim that AI would worsen the signal‑to‑noise problem in media.
  4. Timeframe: already observable within “the coming years”
  • Jason’s prediction was made in early May 2024 and framed as a trend over “the coming years.” By late 2025, we already see:
    • Clear, documented increases in AI‑generated misinformation volume;
    • Growing realism and accessibility of tools to fabricate convincing fake media;
    • Evidence that audiences find it increasingly difficult to distinguish authentic from synthetic content.

Given this combination of quantitative reports and qualitative indicators, the direction and substance of Jason’s prediction have materialized by late 2025. While the trend is ongoing and may worsen further, the claim that AI is making it harder to find the truth in media is already borne out by current evidence, so the prediction is best classified as right rather than inconclusive or ambiguous.