AI Executive Order: What businesses need to know

November 13, 2023
Cyrus Tehrani
4 min read

The AI Executive Order, signed by President Biden on October 30, 2023, marks a significant shift in the United States’ approach to Artificial Intelligence (AI). As AI gains popularity, especially in the enterprise, the order is bound to have profound implications for the private sector.

The goal of the order is to promote safe, secure, and trustworthy development and use of AI so that:

  • Consumers are protected
  • AI risk is reduced
  • Companies can keep up with innovation

The executive order, while extensive, can be broken down into three key mandates:

  • AI governance
  • Promoting responsible AI innovation
  • Managing the consequential risks

To strengthen AI governance, the order encourages the development of AI-related standards that establish common ground for responsible AI usage. These standards are expected to foster transparency, enhance interoperability, and ensure that AI technologies are secure, reliable, and robust.

Next, the order underscores the importance of advanced, responsible AI innovation. It encourages the private sector to invest more in AI research and development so that ethical considerations and societal values guide the next wave of innovation.

Finally, the order acknowledges the potential risks involved with AI usage, and calls for modernizing regulatory frameworks to address these risks. The concern here is that AI technologies must not compromise individual privacy rights, national security, or societal norms.

Apart from the three key mandates, the order also addresses the growing concern over AI-generated deepfakes, and mandates the development of best practices for detecting such content. It includes provisions to ensure AI improves workers’ lives and does not infringe on their rights.

Ultimately, the order paints a comprehensive vision for AI in the United States — grounded in responsibility, innovation, and risk mitigation.

What the AI order means for businesses

For businesses adopting AI in any form, the implications of this executive order are multifaceted.

The most common concerns include:

  • Adhering to standards
  • Boosting data privacy and security
  • Investing in R&D
  • Enhancing the workforce

Adhering to Standards

Businesses, especially those developing AI technologies, will need to align with the standards set by the National Institute of Standards and Technology (NIST). This may involve adopting new methodologies and technologies to ensure compliance, and companies may need to overhaul their current AI systems to meet the new federal guidelines. Businesses should also note the impact on operational processes and the potential for additional costs.

Data Privacy and Security

The executive order emphasizes the protection of personal and sensitive data. As such, businesses must strengthen their data privacy measures to ensure that AI systems are designed to protect user data. Plus, companies will have to invest in robust risk management strategies to identify, manage and remediate potential data breaches or misuse of AI.

Investing in R&D

Businesses are encouraged to invest in AI research and development, which could present new opportunities for innovation and competitive advantage. However, this may require a substantial financial investment and an increased focus on ethical considerations in AI development. While the order aims to foster innovation, the new regulations might also be seen as a roadblock. Like anything in cybersecurity, balancing innovation with compliance will be a key business challenge.

Enhancing the Workforce

With the AI order, a need arises for upskilling the workforce to understand and comply with the new AI regulations, which may include training in ethical AI development, data privacy, and security. The order may lead to the creation of new roles focused on AI governance, ethics, and compliance, transforming the job landscape within tech companies. It's also important to note that businesses will need to invest in developing AI models that are fair and unbiased, which may require new training datasets and algorithms.

How Concentric AI can help companies navigate the AI Order

At the time of this writing, the AI order is very new; until some time has passed, its implications are difficult to quantify. But any business using AI in any way must acknowledge the privacy and security concerns AI brings to its organization.

Before deploying any AI, an organization must have a clear understanding of data risk — especially when it comes to adhering to standards and data privacy.

Concentric AI leverages sophisticated natural language processing capabilities to accurately and autonomously categorize data output from generative AI into categories that include privacy-sensitive data, intellectual property, financial information, legal agreements, human resources files, sales strategies, partnership plans, and other business-critical information.

Concentric AI can analyze the output from any type of AI to discover sensitive information, from financial data to PII/PCI/PHI, and label the data accordingly to ensure that only authorized personnel have access to it. Adopting these least-privilege principles is crucial for adherence to standards and privacy regulations.

Best of all, employees don’t have to worry about labeling the output, which means better security for everyone.

Once data has been identified and classified, Concentric AI can autonomously identify risk from inappropriate permissioning, risky sharing, unauthorized access, and incorrect storage locations.

Remediation actions — such as changing entitlements, adjusting access controls, or preventing the data from being shared — can also be taken centrally to fix issues and prevent data loss.

Book a demo today with our team of AI experts to find out how Concentric AI can help your organization manage AI, boost privacy efforts and reduce risk.

