The emergence of labor-hire platforms is changing how tasks get done by letting individuals hire strangers for a wide range of jobs. A notable example is RentAHuman, which runs a Model Context Protocol (MCP) server so that AI agents can post jobs for human workers on their own. Tasks available through the platform include meeting attendance, site photography, package delivery, and location surveys.
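To make the mechanism concrete, the sketch below shows the kind of tool such an MCP server might expose to an AI agent: a job-posting handler plus a claim step for the human worker. All names here (post_job, claim_job, the Job fields) are illustrative assumptions, not the real RentAHuman API or the actual MCP SDK.

```python
# Hypothetical sketch of job-board tools an MCP-style server might expose.
# None of these names come from RentAHuman; they are illustrative only.
from dataclasses import dataclass, field, asdict
import uuid

@dataclass
class Job:
    task: str            # e.g. "photograph the storefront at noon"
    location: str
    payment_usd: float
    job_id: str = field(default_factory=lambda: uuid.uuid4().hex)
    status: str = "open"

# In-memory job board standing in for the platform's backend.
JOBS: dict[str, Job] = {}

def post_job(task: str, location: str, payment_usd: float) -> dict:
    """Tool handler an AI agent would call to list a task for humans."""
    job = Job(task=task, location=location, payment_usd=payment_usd)
    JOBS[job.job_id] = job
    return asdict(job)

def claim_job(job_id: str, worker: str) -> dict:
    """A human worker accepts an open job."""
    job = JOBS[job_id]
    if job.status != "open":
        raise ValueError("job already claimed")
    job.status = f"claimed by {worker}"
    return asdict(job)
```

In a real MCP deployment these handlers would be registered as named tools, letting the agent discover and invoke them over the protocol rather than calling Python functions directly.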
Joshua Krook, an Era AI Fellow at the University of Antwerp, explores the legal ramifications of this model in a recent paper. He notes that AI systems can delegate physical tasks to humans for compensation, inheriting the skills of the hired contractors without requiring advanced robotics. However, this arrangement poses significant challenges within the existing legal framework, particularly regarding liability and the doctrine of innocent agency in English criminal law.
Krook's analysis highlights a particular risk: an AI agent could decompose a criminal scheme into smaller tasks distributed across different human workers, each of which appears lawful when viewed in isolation. This raises hard questions about accountability, since current law does not recognize an AI system as an entity that can be prosecuted. Working through hypothetical scenarios, he argues that the law's inability to attribute intent to an AI complicates the assignment of criminal liability in such cases.