The leak of the source code for Anthropic’s leading AI coding assistant Claude should serve as a cautionary tale for those in the software space.
Earlier this week, Anthropic announced that a portion of Claude’s underlying source code, the secret sauce behind its market-leading AI assistant, had been inadvertently disclosed – apparently as a result of human error. Whilst the company has sought to assure the market that sensitive customer data and credentials have not been compromised, the leak appears to have had far-reaching consequences. Portions of the code have reportedly been translated into alternative coding languages (by AI) and widely disseminated online.
Whatever the cause of the leak, it is an important reminder to all organisations to ensure that they have robust guardrails in place to mitigate the risk of unauthorised or inadvertent disclosure of sensitive information, including confidentiality policies tailored to the specifics of their business. As the Claude example illustrates, once confidential information has entered the public domain, it can be very difficult, if not impossible, to undo the damage.
This example also raises fascinating questions from an IP standpoint, including the extent to which the translation of source code into an alternative coding language constitutes copyright infringement and, relatedly, whether the resultant code itself constitutes a separate copyright work. The legal position becomes even more complex for AI-generated code, as copyright has generally been considered to require human authorship (for example, a US court has previously held that a monkey who took a ‘selfie’ cannot obtain copyright in the photo).
From a practical standpoint, these questions directly affect how effective traditional IP enforcement strategies, such as takedown notices, are likely to be in the context of software, and in particular AI-generated code.
If you would like to discuss any of these issues further, please don’t hesitate to get in touch with our team.