Mallory

AI Adoption and Governance Updates Across Industry and Government

Tags: ai-platform-security, privacy-surveillance-policy
Updated April 14, 2026 at 06:02 PM · 10 sources


Recent coverage focused on AI adoption, governance, and societal impacts rather than a discrete cybersecurity incident. OpenAI CEO Sam Altman argued that comparing AI energy use to human cognition is "unfair," claiming that the energy cost of "training a human" (years of living and food consumption, plus evolutionary history) should be factored into any judgment of AI efficiency. Separately, he warned that some companies are engaging in "AI washing," blaming layoffs on AI as a pretext for workforce reductions, while acknowledging that genuine AI-driven job displacement is likely to become more noticeable over the next few years.

Enterprises and public-sector organizations highlighted practical AI rollouts and their associated risk considerations. Intel introduced Ask Intel, a support assistant built on Microsoft Copilot Studio, as part of a shift away from public phone support toward web-based case handling, while noting that response accuracy "cannot be guaranteed." Microsoft removed a blog post that had described training LLMs on a Kaggle dataset derived from pirated Harry Potter ebooks, amid ongoing legal uncertainty around fair use and potential contributory-infringement exposure. Separately, U.S. federal officials emphasized targeted AI adoption and expectation management, with the VA reporting hundreds of AI use cases. Remaining items included a hobbyist AI dashboard project shared on GitHub and a generic startup article on AI-accelerated MVP development, neither of which provided substantive security-relevant disclosures.

Timeline

  1. Apr 6, 2026

    Altman urges U.S. to prepare for AI superintelligence risks

    OpenAI CEO Sam Altman said advanced AI is moving quickly into real economic use and urged U.S. policymakers to prepare for both its benefits and risks. He highlighted cybersecurity and biosecurity as near-term danger areas and called for close coordination among government, AI companies, and security groups to prevent major AI-enabled attacks.

  2. Feb 22, 2026

    Altman argues AI-to-human energy comparisons are unfair

    During a 60-minute Q&A hosted by The Indian Express, Sam Altman said comparing AI training and inference energy use directly with human cognition is misleading. He argued that accounting for the long process of human development changes the efficiency comparison and also called for more sustainable energy sources.

  3. Feb 22, 2026

    Intel launches 'Ask Intel' AI support assistant

    Intel rolled out 'Ask Intel,' an AI-powered customer support assistant built on Microsoft Copilot Studio and made it available on its support site. The launch accompanied Intel's broader shift toward digital-first support, including reduced phone and social-media-based support channels.

  4. Feb 20, 2026

    Microsoft removes blog on training LLMs with pirated books

    Microsoft deleted a blog post that had described training language models on pirated Harry Potter books. The removal drew attention to the company's handling of guidance related to copyrighted training data.

  5. Feb 20, 2026

    Sam Altman warns companies may use 'AI washing' to justify layoffs

    Speaking to CNBC at the India AI Impact Summit, OpenAI CEO Sam Altman said some firms may be blaming AI for layoffs they would have made anyway. He also said genuine AI-driven job disruption is likely to become more visible over the next few years.

  6. Feb 19, 2026

    Federal officials outline targeted AI adoption at public health IT summit

    At a Nextgov/FCW and ATARC public health IT summit, current and former U.S. officials said agencies should adopt AI strategically with careful implementation and realistic expectations. VA and NIH described ongoing AI efforts, including customer-experience improvements, grant analysis, and domain-specific language model exploration.

  7. Feb 19, 2026

    OpenAI leaders meet Indian PM during India AI summit week

    Sam Altman and other AI leaders met with Indian Prime Minister Narendra Modi during a week that underscored India's growing importance as an AI market. The meeting provided context for Altman's subsequent public comments on AI jobs and energy use.

  8. Dec 31, 2025

    VA reports 367 AI use cases in its 2025 inventory

The Department of Veterans Affairs said its 2025 AI use case inventory contained 367 entries, up from 227 in 2024. Many of the listed uses were tied to medical devices and clinical-care augmentation.

  9. Apr 1, 2025

    GSA launches OneGov technology purchasing initiative

    The General Services Administration launched the OneGov initiative to help federal agencies buy discounted private-sector technology, including AI tools. The program was later cited as part of broader federal efforts to accelerate practical AI adoption.

See the full picture in Mallory

Mallory subscribers get deeper analysis on every story, including:

Impact Assessment: who's affected and how

Technical Details: deep-dive technical analysis

Response Recommendations: actionable next steps for your team

Indicators of Compromise: IPs, domains, hashes, and more

AI Threads: ask questions and take action on every story

Advanced Filters: filter by topic, classification, and timeframe

Scheduled Alerts: matching stories delivered automatically

Related Stories

AI Adoption and Governance Concerns Amid Emerging Agentic-AI Security Risks

Organizations are accelerating adoption of **generative and agentic AI**, but reporting indicates governance, data readiness, and workforce skills are lagging. A survey of chief data officers cited widespread use of genAI in large enterprises and growing plans to increase **data management** investment, while also flagging that visibility and governance have not kept pace with expanding AI usage and that many employees need upskilling in **data** and **AI literacy** to use AI outputs responsibly. Separately, commentary and reporting highlighted a widening set of AI-related security and societal risks, including concerns about **deepfakes**, privacy, and opaque model behavior, alongside claims of real-world exploitation activity targeting AI-adjacent developer workflows (for example, token theft via compromised automation such as GitHub Actions) and discussion of vulnerabilities affecting AI tooling and agent communication patterns. Other items in the set were primarily newsletter/personal updates or vendor-style announcements and did not provide a single, verifiable incident narrative beyond general AI-and-security trend coverage.

1 month ago
Policy and industry debate over AI safety, governance, and data protection

U.S. policymakers and industry leaders are escalating scrutiny of **AI safety and data protection**, with a particular focus on sensitive data flows and the adequacy of existing guardrails. In a Senate HELP Committee hearing, lawmakers questioned whether federal guardrails are needed to protect Americans’ healthcare data voluntarily uploaded to AI-enabled apps and wearables that may fall outside HIPAA coverage, raising concerns about liability, downstream data use, and integration into medical records; HHS noted it is collecting public input via a request for information on safe and effective AI deployment in healthcare. Separately, commentary on AI governance and safety argues competitive pressure among frontier AI labs can erode safety practices and that clearer antitrust guidance could enable cross-industry collaboration on safety standards without triggering enforcement risk. Tensions over AI “red lines” in national security use also became more public, as **Anthropic** CEO Dario Amodei accused **OpenAI** of misleading messaging about defense work amid reports that Anthropic’s DoD talks faltered over restrictions related to mass domestic surveillance and autonomous weapons, while OpenAI described its agreement as permitting “all lawful purposes” alongside stated prohibitions. Broader, non-incident reporting highlighted enterprise investment to support *agentic AI* (with many data leaders citing governance lagging AI adoption) and general concerns about deepfakes, opaque models, and societal risk; however, several items in the set were primarily newsletters, vendor/industry promotion, or general-interest AI commentary rather than a single, discrete cybersecurity incident or vulnerability disclosure.

1 month ago
AI Adoption and Agentic AI Features Raise Security and Governance Concerns

U.S. public-sector and industry reporting highlighted that **security confidence and workforce constraints** are emerging as major blockers to scaling artificial intelligence. A survey commissioned by *Google Public Sector* found most federal respondents are already using or planning to use AI, but only a small minority report completed AI adoption plans; respondents cited declining confidence in their agencies’ digital security posture, legacy technology exposure, procurement friction, and skills shortages as key impediments to moving beyond pilots. Separately, *Anthropic* introduced a research-preview “agentic” capability, **Cowork for Claude**, built on *Claude Code*, which can execute multi-step tasks with access to local folders and optional connectors (including browser-based workflows). Anthropic warned that ambiguous instructions or misinterpretation could result in **potentially destructive actions** (e.g., deleting local files) despite confirmation prompts for “significant actions,” underscoring the need for tighter controls when granting AI tools operational access. Other items in the set focused on broader AI discourse and geopolitics—Nvidia CEO Jensen Huang disputing “god AI” narratives and a Lawfare analysis of China’s AI capacity-building diplomacy—rather than specific cybersecurity events or actionable security findings.

1 month ago

Get Ahead of Threats Like This

Mallory continuously monitors global threat intelligence and correlates it with your attack surface, so you know if you're exposed before adversaries strike.