The IT Law Wiki

Content filter


Definition

A content filter monitors Web and messaging applications for inappropriate content, spam, intellectual property breaches,[1] non-compliance with an organization’s security policies, and banned file types.

Overview

The filters can help to keep illegal material out of an organization’s systems, reduce network traffic from spam, and stop various forms of cyber attacks. They can also keep track of which users are browsing the Web, when, where, and for how long.

There are three main types of content filters:

1. Web filters, which screen Web pages and block access to those deemed objectionable or non-business related;
2. Messaging filters, which screen messaging applications such as e-mail, instant messaging, short message service (SMS), and peer-to-peer services for spam or other objectionable content; and
3. Web integrity filters, which ensure the integrity of an entity’s Web pages.
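The three types above can be illustrated with a minimal sketch. This is not any particular product's implementation; the blocklist, spam terms, and function names are hypothetical, and real filters use far more sophisticated classification.

```python
import hashlib

# Hypothetical data for illustration only.
BLOCKED_DOMAINS = {"gambling.example", "malware.example"}
SPAM_TERMS = {"free money", "act now", "winner"}

def web_filter(url: str) -> bool:
    """Web filter: allow a request only if the URL's host is not blocklisted."""
    host = url.split("//")[-1].split("/")[0].lower()
    return host not in BLOCKED_DOMAINS  # True = allow

def messaging_filter(message: str) -> bool:
    """Messaging filter: allow a message only if it contains no known spam phrase."""
    text = message.lower()
    return not any(term in text for term in SPAM_TERMS)

def integrity_filter(page_bytes: bytes, known_hash: str) -> bool:
    """Web integrity filter: flag a page whose hash differs from a trusted baseline."""
    return hashlib.sha256(page_bytes).hexdigest() == known_hash
```

For example, `web_filter("http://gambling.example/slots")` returns `False` (blocked), while `messaging_filter("Quarterly report attached")` returns `True` (allowed). The integrity check models defacement detection: the entity records a hash of each trusted page and alerts when the served content no longer matches it.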

Content filters have significant error rates in both directions: they erroneously allow objectionable sites (false negatives) and block sites that are not objectionable (false positives). If implemented correctly, filtering can reduce the volume of unsolicited and undesired e-mail, but it is not completely accurate, and legitimate messages may be blocked. Also, some content filters do not work with all operating systems.
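Both error types can be seen in a naive substring filter, a deliberately simplified sketch (the banned terms and test phrases are hypothetical):

```python
# A naive substring filter illustrating why content filters misclassify.
BANNED = {"sex", "drug"}

def naive_filter(text: str) -> bool:
    """Return True if the text is allowed, False if it is blocked."""
    lowered = text.lower()
    return not any(word in lowered for word in BANNED)

# False positive: a legitimate message is blocked because "sex"
# occurs inside the place name "Sussex".
naive_filter("Our Sussex office opens Monday")  # blocked (False)

# False negative: objectionable content slips through when the
# banned term is obfuscated with spacing.
naive_filter("Buy d r u g s here")  # allowed (True)
```

This over- and under-blocking is why the accuracy caveats above apply even to carefully deployed filters.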

References

  1. An intellectual property breach can include the unauthorized release of client information, trade secrets, ongoing research, and other such information.
