Washington, D.C., is buzzing about Section 230, the internet liability provision within the Communications Decency Act.
Lawmakers on both sides of the aisle have proposed reforming Section 230 to give government a greater hand in controlling online speech. That dangerous path could lead to free-speech infringements that would change the internet as we know it for the worse.
Congress should not police online speech. But if lawmakers want to rewrite the law, they should do so without regulating free expression or stifling innovation. Instead, they should consider these seven principles:
Section 230, as written, already allows the law to hold content creators liable for the content they create. It only restricts who can be held liable for harmful content created by others.
So, if lawmakers don’t think Section 230 does enough to deter bad actors online, they should invest more in enforcement of existing laws, and identify and remove obstacles to enforcement.
Lawmakers should avoid imposing criminal liability on websites and internet service providers for unlawful user-generated content. That could have a chilling effect on American innovation and could result in harmful censorship.
The First Amendment prohibits the government from banning protected speech. Because of this, new regulations shouldn’t allow the government to force internet service providers to ban protected speech.
That would just be an indirect way for the government to violate the First Amendment.
But just because speech is constitutionally protected doesn't mean it's always appropriate. Laws should not discourage websites from moderating content.
Website managers should have the ability to remove any instances of harassment, pornography, racial slurs, and other offensive content. This way, they can ensure their forums promote civil discourse.
Publishing third-party content online can never be "neutral." Even an "objective" newsfeed that shows the latest stories at the top has a recency bias. Mandating neutrality would set an unreachable standard.
Instead, laws should protect websites’ ability to prioritize and deprioritize content, as well as their ability to remove harmful content.
Imagine if internet speech rules varied from state to state. Because people across the country use the same websites and service providers, a patchwork of laws would be like a minefield and nearly impossible to navigate. We need a uniform legal standard that applies to everyone.
It’s difficult for internet companies, especially small ones, to develop new products and services if they’re always afraid of breaking the law.
Regulations must protect these entities, not merely from liability, but from having to defend against excessive, often-meritless suits. If a start-up with a revolutionary idea has to spend all its resources fighting frivolous lawsuits, that new product might never reach the general public.
While websites, such as Facebook, interact with users directly, internet service providers, such as Verizon, do not. This means websites can take down individual pieces of harmful content and leave the rest untouched, while service providers would have to shut down entire websites, taking down content that is not harmful along with it.
When lawmakers write legislation, they must keep this difference in mind. Internet regulations should apply to both websites and service providers. But speech rules cannot be so blunt that they sweep in and harm inoffensive content.