Many online publishers use AdSense or Yahoo!'s Publisher Network to associate adverts with keywords contained in articles. Such systems can, however, match articles and adverts in ways that readers are likely to find offensive. For example, an article about sexual abuse might contain words which trigger adverts for sex-related products.
The Advertising Standards Authority has told OUT-LAW.COM that a publisher bears responsibility for an ad's suitability regardless of how the ad was chosen for a particular slot.
"If ads are generated automatically there should be checks in place; people need to be responsible for the ads they show and they need to meet the requirements of the Code," an ASA spokesman said.
The ASA has declined to investigate an advert that appeared in The Guardian attached to a feature about the resignation of former Pakistani President Pervez Musharraf. The advert read "Pakistan Girls in Photos at Great Prices" and the reader complained that the inclusion of that ad was tasteless and inappropriate.
"The ASA can't intervene simply because the automatic search displays a particular ad," said the ASA's response to the complaint. "This is because our Codes can only be applied to the content of the ad itself and not to the decision to show it."
The ASA spokesman said, though, that publishers were responsible for ads and that if an ad was offensive, the editorial context would play a part in the ASA's adjudication of the issue.
"The context is key – we take into account taste and decency and do regulate the content of ads and will take the context into account," said the spokesman. "If ads for a sex site appeared in the context of a story on child abuse then it would be a breach of the Code, but then some ads might be inappropriate in any context. It is to be decided on a case by case basis."
A Google spokesman said that publishers do have control over the adverts that appear beside a story and can stop particular adverts appearing. He also said that Google itself carries out some work to avoid insensitive matches.
"Google automatically uses technology to stop ads appearing around sensitive issues, such as a plane crash," said the spokesman. "Technology recognises sensitive topics and does not serve ads which might seem insensitive."
Google's AdSense programme has an 'Ad Review Center', which allows publishers to review and control the ads that appear on their sites, Google said. "You can allow or block individual ad groups and advertisers, as well as filter ads by type: text or image," said Google's guidance on using the Ad Review Center.
The Guardian said that a complaint sent directly to the paper had received no response because it had been sent to the wrong email address. A spokeswoman said that the newspaper made efforts to stop ads from being published in offensive contexts, but that it still received complaints.
"We began working with Google at the end of July and at the outset we blocked obviously inappropriate advertising," said the spokeswoman. "Since then we estimate that out of half a billion contextual ads served each month we have had about 20 inappropriate matches. These can arise when an advertiser mislabels their ads and we are taking steps to deal with this."
"Both Google and the Guardian are keen to make this partnership work. It is in no-one's interest to place inappropriate ads beside quality content," she said.
The Guardian's head of digital content recently told its readers' editor: "contextualised advertising in general is a good thing because it's not intrusive and it can be useful to readers, but it's a blunt instrument … for news stories more fine tuning has to be done. These ads are completely automated, and we don't want people to think we have selected them, or specifically approved ads to appear in a slot, if they are mismatched and inappropriate."