
Why This Matters for North American Companies

The EU AI Act, adopted in 2024, is the most comprehensive artificial intelligence regulation to date. It applies to AI systems placed on the EU market or whose outputs are used in the EU, with extra-territorial reach similar to the GDPR. Even if your company operates entirely in North America, the Act can still influence your operations because the AI tools you use are often provided by vendors that serve the EU market.

When your vendors comply with EU rules, they frequently apply changes globally — altering features, policies, and terms of service for all customers. This means that, starting in 2025, North American businesses may experience functional and contractual changes to AI tools they rely on, even if they have no direct EU presence.

Key EU AI Act Provisions Effective 2025

February 2025
Two significant provisions took effect in February. The first is the ban on “unacceptable-risk” AI systems, which include tools that manipulate human behavior to cause harm, systems that exploit vulnerable groups such as children, government social scoring, and certain biometric or emotion-recognition systems in sensitive contexts such as workplaces. Although these prohibitions apply only within the EU, many vendors have disabled such features globally to simplify compliance.

The second provision is the requirement for AI literacy in organizations that provide or use AI in the EU. While North American businesses without EU operations are not directly covered, many will encounter more training resources, in-product guidance, and prompts for responsible use as vendors align globally with these literacy expectations.

August 2025
In August, new obligations took effect for providers of general-purpose AI (GPAI) and foundation models — the large-scale AI systems that underpin many commercial tools. These models include:

  • Language models such as GPT, Claude, or Gemini, which power writing assistants, chatbots, and meeting summarizers.

  • Image generation models such as DALL·E, Midjourney, or Stable Diffusion, which create visuals from text prompts.

  • Multimodal models such as GPT-4o or Gemini Pro, which can process and combine text, images, and audio.

Providers selling these models into the EU must now prepare technical documentation, publish summaries of training data, comply with EU copyright law, and, if their model is deemed to pose “systemic risk,” implement additional safety measures including cybersecurity protections and incident reporting.

For North American business users, these changes often translate into adjustments to the tools they already use. Vendors may alter product features, introduce transparency labels or watermarking for AI-generated content, or restrict certain capabilities — even outside the EU — to maintain a consistent global compliance posture.

August 2025 also marked the point at which the Act’s governance framework became fully operational: the European AI Office, the European AI Board, and national authorities across EU Member States. While their current focus is on AI providers, their oversight will indirectly affect business users through vendor-driven contractual changes, policy updates, or feature modifications.

Impact on North American Businesses That Use AI Tools 

For North American companies that do not build or sell AI systems and have no presence in the EU, the EU AI Act’s August 2025 provisions still have practical implications. The reason is simple: your AI vendors may be covered, and their compliance actions can affect your tools, contracts, and workflows.

A marketing platform you subscribe to may now watermark AI-generated images by default. An HR software tool may disable certain candidate-screening features globally to avoid prohibited uses in the EU. A productivity suite powered by a foundation model may introduce new content provenance features or modify prompt restrictions for all customers. These are vendor-led changes, but they alter your user experience, your workflows, and, in some cases, your contractual obligations.

Even without EU operations, these shifts can affect how you serve your clients, especially if your work indirectly reaches EU markets through partners, supply chains, or content distribution. Vendors may also require you to accept updated terms of service, new data-handling agreements, or attestations about responsible use.

The most important step for 2025 is maintaining visibility over your AI toolset and vendor relationships. Know which tools in your environment are powered by foundation models, monitor vendor communications for product or policy changes, and consider whether these adjustments align with your operational needs and client commitments. While direct compliance obligations for North American business users without EU operations are minimal this year, early awareness will prepare you for the wider obligations arriving in August 2026.

Strategic Steps for 2025

Here are the key strategic steps North American businesses can take in 2025 to anticipate and adapt to vendor-driven changes resulting from the EU AI Act.

  1. Maintain a Detailed AI Tool Inventory – Identify all AI-enabled platforms in use, especially those powered by foundation models. Document their role in workflows, note any dependencies in client deliverables or internal operations, and use this visibility to anticipate where vendor-driven changes could have the greatest impact (a minimal inventory sketch follows this list).

  2. Review Vendor Communications Regularly – Create a structured process for monitoring product updates, policy revisions, contractual changes, and shifts in data-handling practices (see the monitoring sketch after this list). Assign responsibility for assessing the implications of these changes for compliance, operational continuity, and client relationships.

  3. Prepare for Client Inquiries – Develop clear, consistent messaging for clients—particularly those in regulated industries or with EU connections—explaining how the AI tools you use meet transparency, safety, and governance expectations. This preparation can help maintain trust and streamline client interactions.

  4. Avoid Reliance on Prohibited Features – Even without EU legal obligations, proactively removing high-risk features such as certain biometric recognition or manipulative AI functions from your workflows reduces the risk of disruption if vendors disable these capabilities or global regulations expand.

  5. Implement AI Training Programs – Build AI literacy across leadership and operational teams so they can understand the capabilities and limitations of AI tools, recognize compliance considerations, and adapt quickly to product changes.

  6. Establish an AI Governance Framework – Create clear policies covering vendor selection, acceptable use, documentation requirements, and escalation procedures for compliance concerns. This framework provides consistent oversight and resilience in the face of evolving regulations.
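
To make step 1 concrete, here is a minimal sketch of what such an inventory might look like, assuming a small Python tracking script; every tool name, vendor, and field below is illustrative rather than drawn from any real product:

  from dataclasses import dataclass, field

  @dataclass
  class AIToolRecord:
      """One row in the AI tool inventory."""
      name: str                 # product name as licensed
      vendor: str               # who provides it and fields compliance questions
      foundation_model: str     # underlying model family, if disclosed ("unknown" otherwise)
      vendor_serves_eu: bool    # does the vendor sell into the EU market?
      used_in: list[str] = field(default_factory=list)  # workflows or client deliverables

  # Illustrative entries only.
  inventory = [
      AIToolRecord("Acme Writer", "Acme Inc.", "GPT", True, ["blog drafts", "proposals"]),
      AIToolRecord("PixelGen", "PixelCo", "Stable Diffusion", True, ["ad visuals"]),
      AIToolRecord("NoteBot", "NoteCo", "unknown", False, ["meeting summaries"]),
  ]

  # The tools most exposed to vendor-driven change: foundation-model-powered
  # products from vendors that serve the EU and must meet GPAI obligations.
  exposed = [t for t in inventory if t.vendor_serves_eu and t.foundation_model != "unknown"]
  for tool in exposed:
      print(f"Watch for changes: {tool.name} ({tool.vendor}) - used in {', '.join(tool.used_in)}")

Even a spreadsheet serves the same purpose; the point is that the fields above capture exactly what you need to predict where an EU-driven vendor change will land in your operations.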

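For step 2, the sketch below shows one lightweight way to structure that monitoring, assuming your vendors publish RSS or Atom changelog feeds; the feed URLs and watch terms are placeholder assumptions, and feedparser is a third-party library (pip install feedparser):

  import feedparser  # third-party: pip install feedparser

  # Placeholder URLs: substitute your vendors' actual changelog or trust-center feeds.
  VENDOR_FEEDS = {
      "Acme Inc.": "https://example.com/acme/changelog.rss",
      "PixelCo": "https://example.com/pixelco/updates.atom",
  }

  # Terms that suggest a change worth routing to whoever owns the review process.
  WATCH_TERMS = ("eu ai act", "watermark", "provenance", "terms of service",
                 "data handling", "gpai", "transparency")

  def flag_relevant_updates():
      for vendor, url in VENDOR_FEEDS.items():
          for entry in feedparser.parse(url).entries:
              text = (entry.get("title", "") + " " + entry.get("summary", "")).lower()
              if any(term in text for term in WATCH_TERMS):
                  print(f"[{vendor}] review: {entry.get('title')} -> {entry.get('link')}")

  flag_relevant_updates()

A keyword filter like this will not catch everything, so it complements rather than replaces the assigned reviewer in step 2; its value is turning a scattered stream of vendor emails and release notes into a single queue someone is accountable for.
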
We help North American businesses strengthen AI literacy, improve governance, and adapt to global regulatory changes before they become operational risks. Our AI training and AI governance advisory services equip your teams to navigate AI changes with confidence. Contact us to prepare your organization for the next phase of AI regulation.