Alexa Media Services - Alexa News Nigeria (Alexa.ng)



A Pivotal Month for United States Tech Policy: Donald Trump's First 100 Days Usher in AI Reforms, Antitrust Momentum, and Landmark Online Safety Legislation


As the Trump Administration marked the completion of its inaugural 100 days in office on April 29, 2025, the landscape of technology policy in the United States entered a phase of rapid transformation and fervent legislative activity. What began as a campaign promise to dismantle bureaucratic hurdles and reclaim American dominance in emerging technologies materialized into concrete actions that reshaped federal approaches to artificial intelligence, antitrust enforcement, and digital content moderation. April 2025 stood out as an extraordinarily dynamic period, with executive directives, judicial proceedings, and congressional maneuvers converging to address longstanding concerns about innovation, competition, and public safety in the digital realm. This month not only highlighted the administration's commitment to a "pro-innovation, pro-competition" ethos but also revealed tensions between accelerating technological adoption and safeguarding civil liberties, consumer rights, and market fairness.

First Lady Melania Trump and Sen. Ted Cruz (R-TX) participate in the March 2025 "Be the Best" roundtable on AI and deepfakes, where they discussed the TAKE IT DOWN Act, which passed Congress this month.


At the forefront of these developments were two pivotal memoranda issued by the White House Office of Management and Budget (OMB) on April 3, 2025—M-25-21, titled "Accelerating Federal Use of AI through Innovation, Governance, and Public Trust," and M-25-22, "Driving Efficient Acquisition of Artificial Intelligence in Government." These documents rescinded and replaced analogous guidance from the Biden era, representing the Trump Administration's first substantive overhaul of federal AI policy. Stemming from President Trump's January 23, 2025, Executive Order 14179—"Removing Barriers to American Leadership in Artificial Intelligence"—the memos aimed to foster a more agile federal bureaucracy capable of leveraging AI to enhance government efficiency while prioritizing domestic innovation.

To understand the significance of these memos, it is worth noting their place in the broader evolution of U.S. AI governance. During his first term, President Trump issued an executive order in 2020 emphasizing the promotion of "trustworthy AI" in federal operations, laying the groundwork for risk-aware deployment without stifling progress. The Biden Administration built upon this with a 2023 executive order introducing stringent risk management frameworks, mandatory inventories of AI use cases, and the establishment of Chief AI Officers (CAIOs) in numerous federal agencies by mid-2024. However, the Trump Administration viewed these measures as overly prescriptive, potentially hampering the United States' competitive edge against global rivals like China. The new directives shifted toward a "forward-leaning" strategy to sustain and enhance America's global AI dominance by reducing regulatory burdens and emphasizing investments in "American-Made AI" technologies.

M-25-21, the flagship memo on AI use, retained core structural elements from its predecessor, such as the requirement for agencies to appoint CAIOs and convene a Chief AI Officer Council under OMB leadership. It also preserved governance boards at the agency level and the mandate for comprehensive inventories of AI applications. These continuities ensured institutional stability, allowing federal entities—from the Department of Defense to the Social Security Administration—to build on existing inventories rather than starting anew. Yet, the memo introduced impactful changes by softening risk management protocols. The definition of "high-impact AI systems"—those warranting enhanced oversight—was narrowed to focus primarily on applications involving national security, critical infrastructure, or direct public harm, excluding many lower-stakes uses like routine data analysis in administrative tasks. This recalibration aimed to expedite AI deployment in areas such as predictive analytics for disaster response or automated permitting processes, potentially saving taxpayer dollars and accelerating service delivery.

The memo allowed for waivers from risk management requirements more readily than before, providing agencies with flexibility to innovate without exhaustive compliance checklists. It included a non-exhaustive list of presumed high-impact uses, such as AI in law enforcement surveillance or benefits eligibility determinations, but omitted several earlier categories like certain environmental modeling tools, signaling a deliberate de-emphasis on expansive regulatory scopes. The memo also reinforced safeguards for privacy, civil rights, and civil liberties, directing agencies to align AI practices with existing frameworks on information security. Experts noted that while the memo promotes "human flourishing" through AI, it places greater responsibility on agencies to self-assess risks, potentially fostering accountability through performance tracking rather than top-down mandates.

M-25-22 focused on procurement, urging agencies to prioritize "best-in-class" AI solutions through competitive bidding and streamlined acquisition processes. Effective immediately, the memo set ambitious timelines: within 200 days, the General Services Administration (GSA) and OMB were tasked with launching a web-based repository of AI procurement tools, including model contracts and evaluation criteria. A key innovation was the emphasis on maximizing the use of U.S.-produced AI products and services, aligning with broader administration goals to bolster domestic supply chains amid geopolitical tensions. Agencies were directed to revisit data ownership and intellectual property protocols in AI contracts, ensuring federal investments yield reusable assets without ceding control to foreign entities. This procurement shift could inject billions into American firms, from startups developing edge AI for defense to enterprises scaling cloud-based models for healthcare.

The release of these memos sparked debate. Advocacy groups expressed concerns that softened risk mandates could exacerbate biases in AI systems, particularly in high-stakes domains like hiring algorithms or predictive policing. Conversely, industry leaders hailed the changes as a necessary step to prevent the U.S. from lagging in the global AI race. Analysts observed that the memos strike a balance by building on earlier frameworks while reducing bureaucratic overreach, positioning federal AI adoption as a tool for economic competitiveness rather than regulatory caution. By April's end, agencies had begun submitting updated AI inventories, with early reports indicating a surge in pilot programs for American-made tools.

Parallel to these executive actions, April 2025 witnessed accelerated momentum in antitrust enforcement against Big Tech, underscoring the administration's bipartisan resolve to curb monopolistic practices. The Department of Justice (DOJ) advanced into the remedies phase of its landmark case against Google, following a 2024 court ruling that the company had unlawfully maintained a monopoly in general online search. The 15-day remedies trial, commencing in early April, featured intense debates over structural reforms to dismantle Google's dominance in a market where it handles over 90% of U.S. search queries.

The DOJ sought aggressive interventions, including prohibitions on exclusive distribution deals for Google Search, Chrome, Google Assistant, and its AI tools across devices. Prosecutors argued that Google's multibillion-dollar payments to partners like Apple created insurmountable barriers for rivals such as Bing or DuckDuckGo. They also highlighted emerging threats from AI, warning that Google's vast search index could entrench its position in generative tools, potentially stifling innovation in the nascent AI search sector. Witnesses testified to the chilling effect of Google's market power, deterring investment in alternative search startups.

Google countered with proposals for behavioral remedies, such as sharing anonymized search data with competitors and establishing a technical compliance committee, while opposing divestitures like selling Chrome. The company's leadership emphasized how AI advancements had democratized information access, rendering monopoly claims outdated. The trial's proceedings drew parallels to historic antitrust cases, with the outcome setting the stage for a September 2025 ruling that would prohibit exclusive contracts but spare Chrome, acknowledging AI's disruptive influence. This outcome reflected the view that the emergence of generative AI has reshaped the competitive landscape, averting more drastic measures.

Simultaneously, the Federal Trade Commission (FTC) launched its trial against Meta Platforms on April 14, 2025, aiming to unwind the company's 2012 acquisition of Instagram and 2014 purchase of WhatsApp. The case alleged a strategy that neutralized nascent threats to Facebook's social networking monopoly. The FTC pressed forward, arguing that Meta's moves eliminated upstart competitors and harmed consumers through reduced innovation and higher advertising costs.

The two-month trial featured testimony from Meta's CEO, who acknowledged that acquiring Instagram was partly to neutralize a potential competitor. Prosecutors presented evidence of internal concerns that WhatsApp's encrypted messaging could evolve into a full social network, justifying the acquisition's price tag. The FTC defined the market narrowly as personal social networking services, pitting Meta against limited rivals, and sought divestitures to restore competition. Meta's defense highlighted post-acquisition investments that benefited consumers, while pointing to TikTok's dominance as proof of a vibrant market. A TikTok outage in early 2025 reportedly spiked Instagram traffic, bolstering claims of direct rivalry. The proceedings tested antitrust law's adaptability to digital markets, with significant implications for future tech consolidations.

Amid these high-profile battles, Congress delivered a bipartisan triumph with the passage of the TAKE IT DOWN Act (S.146/H.R.633) on April 28, 2025. Introduced by Senators Ted Cruz (R-TX) and Amy Klobuchar (D-MN), and its House counterpart by Representatives Maria Salazar (R-FL) and Abigail Spanberger (D-VA), the bill passed the Senate unanimously in February before clearing the House by a 409-2 vote. President Trump signed it into law on May 19, 2025, in a ceremony attended by survivors and advocates.

The Act criminalizes the nonconsensual publication of intimate visual depictions, including AI-generated deepfakes and digital forgeries, targeting both authentic and synthetic images intended to harass or exploit. Platforms with over 1 million users must implement rapid-takedown mechanisms, removing offending content within 48 hours of victim notification. Penalties include hefty fines and potential imprisonment, with enhanced protections for minors. The law addresses the surge in nonconsensual intimate images, which affected a significant portion of Americans in 2024, disproportionately impacting women and marginalized groups.

Victim advocates praised the Act as a lifeline, empowering survivors to reclaim control over their digital likenesses amid the rise of accessible AI tools. The legislation's swift passage followed congressional hearings featuring survivor testimonies, underscoring alarm over AI-enabled exploitation. However, civil liberties groups raised concerns about unintended consequences, warning that vague definitions and mandatory takedowns could enable over-censorship, chilling free speech and eroding privacy. Platforms might remove content preemptively to avoid liability, potentially affecting marginalized voices. The Act's balance between victim rights and free expression will hinge on forthcoming regulatory guidance, potentially setting global content moderation standards.

April 2025 saw additional tech-related bills advancing through Congress, reflecting a push to modernize digital infrastructure and consumer protections. The House passed four measures: one mandating clear labeling for devices with cameras or microphones to alert users to surveillance risks; another bolstering domestic semiconductor production amid U.S.-China tensions; a third allocating funds for next-generation broadband in rural areas; and a fourth enhancing cybersecurity for satellite communications. These bills enjoyed bipartisan support and Senate companions, positioning them for swift enactment.

Six more bills progressed through committees. One required financial institutions to flag elder fraud patterns using AI-driven analytics. Another standardized encryption protocols for telecoms. A third incentivized federal use of domestic blockchain for secure transactions. In the Senate, bills focused on supply chain audits and spectrum allocation gained traction, while the House advanced a measure imposing stricter reporting on cybersecurity breaches.

Eleven new bills were introduced, signaling sustained momentum. Notable proposals included establishing training programs for AI specialists, accelerating quantum-resistant encryption adoption, and creating a commission on U.S. quantum dominance. Others targeted deepfake harms and stablecoin regulation. These introductions spanned AI ethics, blockchain integration, and supply chain resilience, with significant lobbying activity influencing outcomes.

In retrospect, April 2025 encapsulated the Trump Administration's tech policy vision: deregulatory yet strategic, aggressive on competition, and responsive to societal harms. The OMB memos streamlined AI's federal embrace, antitrust trials challenged tech titans, and the TAKE IT DOWN Act fortified digital dignity, while a cascade of bills promised enduring reforms. Success will depend on implementation—ensuring innovation thrives without eroding trust. With AI's exponential growth and geopolitical stakes rising, these developments position the U.S. at a crossroads, where policy agility could cement leadership or invite unforeseen pitfalls. As Congress adjourns its spring session, the stage is set for deeper debates on quantum security, platform accountability, and beyond, reminding us that in the digital age, governance must be as innovative as the technologies it regulates.

Jokpeme Joseph Omode stands as a prominent figure in contemporary Nigerian journalism, embodying the spirit of a multifaceted storyteller who bridges history, poetry, and investigative reporting to champion social progress. As the Editor-in-Chief and CEO of Alexa News Nigeria (Alexa.ng), Omode has transformed a digital platform into a vital voice for governance, education, youth empowerment, entrepreneurship, and sustainable development in Africa. His career, marked by over a decade of experience across media, public relations, brand strategy, and content creation, reflects a relentless commitment to using journalism as a tool for accountability and societal advancement.
