Last week, we saw two major headlines touting a future where AI will be safe and secure. The first was Apple’s announcement that they will voluntarily comply with President Biden’s executive order on safe AI. A few days later, the European Union passed the long-awaited Artificial Intelligence Act, enacting broad AI regulations. Let’s take a deeper look at these announcements, and at the differences between how the US and EU are approaching the AI revolution, with all of its risks and benefits.
The Apple decision comes roughly a month after their beta release of Apple Intelligence, a new suite of AI tools that operate across the entire Apple laptop, tablet and smartphone ecosystem. Better late than never, Apple joins major tech peers Microsoft, Google, Meta and Amazon, which previously announced voluntary compliance with last October’s Executive Order. A number of the rising-star AI companies, including OpenAI, Anthropic and Inflection, have also stated their intent to comply.
Apple Intelligence is being marketed for its security
The AI tools are touted to be lightweight enough to run entirely at the edge – simply meaning on the phone or laptop itself. For more complex queries, where cloud computing is needed, Apple claims that user queries are restricted to Apple’s own private cloud servers, so user privacy is maintained and data remains encrypted. From their website: “Apple Intelligence with Private Cloud Compute sets a new standard for privacy in AI, unlocking intelligence users can trust.”
Bold claims. Keeping all Apple user data “on premises” on Apple servers does provide a level of security from the world outside Apple. But what is not so boldly claimed is that it does little to protect your data from Apple itself. For years, the big tech companies have been building suites of products for just this reason – to get all of your data onto their proprietary servers.
If Apple, for example, knows more about you than Microsoft or Google, then they can provide you better (AI customized) services, further strengthening their grip on keeping you on their platform. Have you ever tried to leave an Apple service ecosystem? It is painfully hard. AI will make customer lock-in even stronger.
What does it mean that the tech giants are voluntarily complying with Biden’s Executive Order? The truth is – it doesn’t mean much. First let’s consider what the EO covers.
● It encourages the achievement of consensus industry standards without explicitly recommending any. It’s pretty easy for industry to commit to having a seat at the table for future discussion.
● It emphasizes safety, security and ethical considerations without actually proposing or defining specific regulations in these areas. Companies are getting great PR complying with empty demands.
● It relies on federal agencies to establish their own standards, relevant to use cases in their respective domains. The good news is the EO gives agencies procurement and grantmaking influence to steer funds to organizations that demonstrate good behavior. Will economic incentives be enough?
● It imposes no explicit penalties for non-compliance, relying instead on corporate self-governance. The fox is guarding the henhouse.
● It advocates for clarifying copyright and intellectual property boundaries for AI training and development without actually imposing any requirements. We trust that industry will disclose if it is creating derivative works based on proprietary content. We see artists already adding “poison” to their new works to catch AI malfeasance, with early indications that few organizations are behaving ethically. Whether this is deliberate or accidental as companies figure out how to sort protected from unprotected data is an open question.
Generally speaking, the US Executive Order is a starting point, simply to open a conversation that is too politically charged (even though it has nearly zero to do with politics) to take concrete action on at this time. Big companies are getting quite a bit of public-relations benefit without really committing to any behavior change, or risk of non-compliance. Everything is voluntary and nothing is required. It is little wonder to me that Apple waited until now to announce their plan to comply. It fits really well with a security-focused marketing campaign around their latest AI product offering.
So how about the EU? Are they any better?
The answer is that the AI Act is significantly stronger. While there is still a long way to go, the AI Act has a number of strong initial provisions. It remains to be seen how well these will stand up to the legal challenges that are certain to come. Here are the key points of the AI Act.
● Establishes a comprehensive regulatory framework across all EU member states and all market domains. This should lead to regulatory consistency, which is a hugely important benefit of the EU approach. Standardization tends to accelerate industries in a positive manner (especially for small businesses) and to reduce entry-points for special-interest lobbying. The US distributed approach across many agencies is likely to lead to highly fragmented and industry-specific future regulations (which invariably lead to loopholes).
● Establishes AI risk level categories – unacceptable, high, limited and minimal/no risk – to mitigate risks to health, safety and fundamental rights. Regulations are properly apportioned across the risk levels, creating more clarity for industry and regulators alike. High-risk applications are strictly regulated.
● Bans the use of remote biometric identification in public spaces. This clause clearly prioritizes public interests ahead of corporate interests. It is a major step forward in protecting individual rights and avoiding a government or corporate Surveillance State (a topic I wrote about recently, in part 1, part 2 and part 3).
● Establishes a clear legal framework, with heavy penalties for non-compliance. Unlike the US approach, the EU is dictating terms to industry. In the US, we are letting companies lead the way and trusting them to behave.
● Requires companies to fully disclose all protected works and data used in AI training. This opens the door to legal dispute and financial penalties for unauthorized use of copyrighted material.
The US and EU approaches have a few things in common
● Both emphasize the importance of continuous and rigorous testing, before and after AI is deployed.
● Both reflect a clear understanding of the need for “security by design” in all cybersystems. Cybersecurity must be integrated with AI from the start.
The differences far outweigh the similarities
In a nutshell, the EU continues to be the global leader in protecting consumer privacy. The culture set by the General Data Protection Regulation back in 2016 (and fully enforceable by 2018) is continued and expanded in the new Artificial Intelligence Act.
Meanwhile, in the US we are giving companies free rein to police themselves. This is great for large tech companies that have outsized influence (lobbying) over future regulations, tailored to their own products and preferences. It puts smaller companies at risk, as they build tools that they can only hope will survive a fragmented regulatory landscape.
So what happens next?
In the absence of federal regulation, I expect we will see individual states create their own regulations to fill an obvious gap. This makes it complicated for small companies to develop new products that work consistently wherever they are used. A large company may be able to withstand an occasional legal challenge across the US. But small companies rarely can.
The best bet for anyone wanting to compete internationally is simply to follow Europe’s lead, developing to the EU regulations. You can get away with a lot, and perhaps get to market more easily, in the US. But by doing so, most companies are putting their profits far ahead of consumers’ privacy – even as they tout how well they follow the Executive Order.