Anita Srinivasan, LL.M. Candidate, Class of 2026
Artificial intelligence assistants are becoming the new gateways to online information. Products such as Google’s Gemini, Microsoft’s Copilot, and Apple’s integration of ChatGPT into Siri allow users to ask questions directly and receive synthesized answers. These assistants are being built into phones, browsers, and operating systems, often triggered by a button press or wake word, and increasingly appear inside search pages as “AI Overviews.” This blog post argues that, without additional safeguards at the answer layer, the neutrality principles that traditionally applied to search will be diluted. To avoid that outcome, regulators should focus on “assistant neutrality” – a narrow set of obligations that preserve transparency and user choice without prescribing URL rankings or page design.
The recent ruling in United States v. Google (August 2025) imposed remedies for Google’s abuse of monopoly power in general search. The court restricted Google from entering into exclusive default contracts covering Google Search, Chrome, Google Assistant, and the Gemini app, and it mandated certain data-sharing and syndication arrangements to give rivals a fairer opportunity to compete. However, the court did not address how assistants present information, how answers are grounded, or how visible third-party content remains within this interface layer.
Assistant use often still begins with the search engine, but the first interaction has moved up the page. When an AI-generated answer renders before links, it shapes what most people see and whether they continue to the underlying sources. Further, independent studies and measurements have shown that increased use of such “AI Overviews” correlates with reduced downstream clicks. In parallel, AI assistants are increasingly invoked outside the search page: a side button, long-press, or wake word selects a service before any comparison can occur. Together, these shifts relocate the gate from the ranking of URLs to two earlier questions: who answers first, and how the answer is constructed and attributed.
Two levers therefore determine outcomes at the answer layer:
- Invocation (who answers first): Device/OS/browser triggers determine which assistant is called. If one provider effectively controls that trigger, rivals may struggle to be discovered and adopted.
- Presentation and source visibility (how the answer is built): The answer synthesis layer can omit or bury sources, privilege first-party properties, and depress downstream clicks. Even where an answer is accurate, the lack of clear attribution can reduce the visibility of third-party publishers and limit user evaluation.
While many have articulated antitrust law and policy concerns about AI assistants in particular and generative AI in general, this analysis rests on consumer protection. Section 5 of the FTC Act reaches opaque or coercive methods of competition that tend to reduce choice or impair rivals at an early stage, particularly where less-restrictive alternatives are available.
In light of the evolution of the FTC’s view under Section 5, it may be argued that the AI assistant answer layer is a natural fit for targeted transparency and choice obligations. In 2013, following a multi-year investigation into alleged search bias, the FTC closed its case against Google without finding a Section 5 violation in relation to how results were ranked or presented. The Commission accepted limited commitments (e.g., around scraping and certain ad-tool restrictions) but declined to impose any general neutrality duty on search presentation. However, in 2022, the FTC adopted a policy statement clarifying that Section 5 covers conduct beyond the Sherman and Clayton Acts, including coercive, deceptive, or incipient practices that tend to harm competitive conditions and can be addressed without a full rule-of-reason inquiry, especially where less-restrictive remedies exist.
Viewed through that lens, two practices at the answer layer warrant attention: (i) black-box answers without clear attribution of sources, and (ii) sticky/default invocation methods that are difficult to change. Both affect users and competitors before conventional competition occurs, and both are remediable with targeted, low-burden obligations such as:
- Non-exclusive invocation methods that can be easily switched: Contractual exclusivity at the invocation layer should be disallowed, so that device-level triggers are not bound to a single assistant. A straightforward setting/control that allows users to remap invocation quickly would address this while preserving convenience. While original equipment manufacturers may still ship a default, users should be able to choose alternatives without friction. This approach reduces the tendency toward coercive lock-in while avoiding prescriptive setup flows.
- Transparent attribution and source visibility: When an assistant synthesizes an answer, it should disclose what it relied on, including links to specific sources where feasible. Basic attribution is the least intrusive remedy: it supports user assessment, maintains publisher discoverability, and deters undisclosed self-preferencing.
- FRAND-like access to core inputs (i.e., index and key datasets): Assistant quality depends on access to web-scale indexes and high-signal licensed datasets. While the August 2025 decree opens a door to index/data sharing and syndication, a fair, reasonable, and non-discriminatory (FRAND-like) standard would make that door explicitly available to assistant providers on commercial terms, with appropriate rate-limit and security safeguards. This approach addresses input foreclosure concerns while preserving investment incentives without requiring disclosure of model weights.
- Independent oversight with key public metrics: Compliance monitoring can be extended to the answer layer, and a short, outcome-oriented dashboard can be published that indicates: (i) the share of OEM/OS/browser arrangements that are non-exclusive at invocation; (ii) the number of index/data access grants to qualified assistants; and (iii) the rate at which answers include attributed sources.
This proposal is the least restrictive alternative: it neither requires re-ranking results nor prescribes interface layouts. Similarly, it does not mandate confusing “choice screens,” and it does not compel free data access, because FRAND-standard access to inputs is commercial, safeguarded, and limited to what is necessary for viable competition.
While the United States v. Google decision resolved major distribution issues at the search layer, neutrality at the AI assistant answer layer remains ungoverned. A narrow assistant-neutrality standard covering non-exclusive invocation with an easy switch, transparent attribution of sources, access to data inputs, and a limited set of public metrics offers a baseline for contestability that aligns with the FTC’s renewed orientation of Section 5 around transparency and consumer choice. These obligations enable competition inside the AI assistant layer while avoiding prescriptions about ranking or design, and they help maintain a neutral experience for users, publishers, and future entrants.