As the US works out what a national AI policy should look like, states are simultaneously passing laws that range from hiring and employment rules to "frontier model" safety requirements. This piece argues that the real question isn't whether AI is important (it is), but which parts of the stack states can regulate without running into constitutional limits—especially where state rules spill across state lines or collide with federal authority over interstate commerce.
The article frames state "police powers" as broad but bounded: AI rules need a clear connection to local welfare, should be targeted to a state's own residents, and must respect constitutional rights such as free speech. It suggests that laws focused on in-state use cases (like employment screening or disclosure requirements for political deepfakes) stand on firmer ground than laws that effectively regulate model development, training, or pre-deployment practices nationwide. With hundreds of state AI bills already in play, the author expects coming litigation to clarify which approaches survive.