Twenty-Five Years of DMCA

One of the most disliked but necessary laws in the broadband world has been the DMCA (Digital Millennium Copyright Act). The law was passed in 1998 to bring the U.S. into compliance with the World Intellectual Property Organization (WIPO) copyright treaties.

The DMCA is one of the major reasons that companies felt safe in creating social media platforms. Section 512 of the law is the part that web policy folks know best. It creates a safe harbor for a website that hosts user-generated content. The law says that a website is protected from copyright infringement suits as long as it designates a DMCA agent with the U.S. Copyright Office and agrees to pull down any content that a copyright owner claims infringes its copyright.

This one provision freed websites to start hosting user-generated content. The early fear was that copyright infringement complaints and lawsuits could paralyze and bankrupt anybody who openly allowed the public to post anything they wanted. The DMCA rules mean that a website owner doesn’t have to monitor content for copyright infringements (something nobody knows how to do); instead, the onus is on the copyright holder to ask to have the offending content removed.

There are a lot of problems caused by the DMCA. One of the biggest complaints is that websites typically remove anything upon request, including a lot of content that doesn’t actually infringe any copyright. This has allowed unscrupulous people to use the DMCA system as a way to censor content they don’t like.

But a bigger complaint by First Amendment advocates is that the pressure created by DMCA has led many website owners to censor their own sites to cut down on the number of DMCA takedown requests. It’s expensive for a national website to have enough people on hand to react to huge volumes of takedown requests. Website owners often try to reduce DMCA notices by screening and blocking content that they think will be in violation. That’s a slippery slope from a First Amendment perspective because any censoring software is going to make mistakes and block legitimate content.

Another big problem is that the DMCA process is not instantaneous. People can post content that they know is in violation of copyright with the knowledge that the content will be available long enough to be useful. For example, there are people who routinely post pirated streams of live sporting events since they know that any DMCA takedown won’t happen until after the event is over.

Another outcome of DMCA rules is that an entire industry of companies has been created to monitor content on websites and issue DMCA takedown requests. Some of these companies also routinely sue website owners that they say are out of compliance – and they are accused of doing so to negotiate monetary settlements rather than to deal with content issues. These efforts are a major cost and annoyance for large web companies, but can be devastating for any startups or small web companies.

There have been attempts in recent years to pass laws that are harsher and more difficult to comply with than the DMCA. For example, the Kids Online Safety Act (KOSA) that is currently being considered would require websites to take steps to always block harmful content from getting to children. This would include not only things like pornography, which many websites already block, but also other content that is identified as harmful by parents. It’s one thing to ask a website owner to comply with copyright laws, but it’s something entirely different to ask website owners to somehow identify and block an ever-changing list of content that is deemed to be offensive.

7 thoughts on “Twenty-Five Years of DMCA”

  1. I think a lot of the anti-“censorship” First Amendment implications for private companies are a pretty hard stretch.

    And, there are a lot of non-DMCA-related reasons why internet companies filter or prioritize certain kinds of content — similar to pornography, they’ve got a huge commercial incentive to avoid various kinds of content they expect their audience would find offensive or highly controversial.

    They’re private companies, and to a large extent they’re being paid (usually with ad dollars) to curate what their users see, in whatever way is anticipated to be useful to those users. Those are not examples of government policies abridging freedom of speech; they’re commercial decisions.

    It’s just convenient if you’re the one being deprioritized or removed to scream “First Amendment violation!” especially if the deprioritization is also depriving you of ad dollars.

      • It strikes me as a stretch, although, as Doug indicates, there’s a bunch of complaining about “censorship” mostly by conservatives.

        Since tech companies are typically populated (although not necessarily owned it turns out) by liberals, there’s a feeling that content must be getting censored to suppress conservative voices. There are controversial middle-grounds where one person’s disinformation about covid is another person’s critical red-pilling debunk of the mainstream media narrative.

        Some people seem to think you should be able to say just whatever the hell you want and that’s the definition of free speech. In reality, the definition is that the government is constrained from making laws that prevent citizens’ speech or that compel them to say things the government wants. And there are a bunch of exceptions for things like yelling “fire” in a crowded theater or inciting insurrections.

        And… the argument goes that if the DMCA, which is a law intended to protect copyrights, indirectly causes otherwise legitimate speech to be removed, then the corporations are violating free speech constraints. I think that’s a total stretch.

        In reality, extreme right-wingers and some extreme left-wingers have a variety of opinions they feel strongly about that the various internet media companies don’t want to publish because they feel those opinions violate “community standards,” meaning they worsen the experience of their users (who may have to look at Nazis or whatever). The content producers are pitching a fit claiming free speech violations. Being “deplatformed” or having your content taken down means you aren’t making ad revenue or getting to advance your political career, so it means money to the content owners.

        But, no, IMO these are private companies and it’s entirely their business what they choose to publish or not.

      • Good points. I think the problem is that some people don’t recall (or slept through) their high school civics or government classes, where they would have learned that the First Amendment was drafted in the context of government, not private actors.

      • There was a recent embarrassing situation where a sitting member of the FCC argued that the First Amendment applies to social media. The instructor of that person’s civics or government class would have written a red-ink “see me” note on an exam paper with that answer. 🙂