In the NY Times, Adam Seth Litwin says the new Writers Guild of America contract may provide a template for other workers threatened by artificial intelligence:
The W.G.A. contract establishes a precedent that an employer’s use of A.I. can be a central subject of bargaining. It further establishes the precedent that workers can and should have a say in when and how they use artificial intelligence at work.
It may come as a surprise to some that the W.G.A. apparently never wanted, nor sought, an outright ban on the use of tools like ChatGPT. Instead, it aimed for a more important assurance: that if A.I. raises writers’ productivity or the quality of their output, guild members should snare an equitable share of the performance gains. And the W.G.A. got it.
How did it achieve this? In this case, the parties agreed that A.I. is not a writer. The studios cannot use A.I. in place of a credited and paid guild member. Studios can rely on A.I. to generate a first draft, but the writers to whom they deliver it get the credit. These writers receive the same minimum pay they would have had they written the piece from scratch. Likewise, writers can elect to use A.I. on their own, when a studio allows it. However, no studio can require a guild member to use A.I.
These negotiations were likely aided by the fact that studios have their own concerns about A.I. Material generated by A.I. cannot be copyrighted, which presents several challenges to studios, since they typically own the copyright on material from writers they’ve hired. By assuring that human writers will be involved in any writing that also involves A.I., the studios start to insulate themselves from those copyright concerns.